modelId: string (length 5 to 139)
author: string (length 2 to 42)
last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-07-16 06:27:54)
downloads: int64 (0 to 223M)
likes: int64 (0 to 11.7k)
library_name: string (522 distinct values)
tags: list (length 1 to 4.05k)
pipeline_tag: string (55 distinct values)
createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-07-16 06:27:41)
card: string (length 11 to 1.01M)
nakcnx/OTG-Math-680
nakcnx
2023-03-25T21:34:48Z
7
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "openthaigpt", "th", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-25T11:43:19Z
--- license: apache-2.0 language: - th pipeline_tag: text-generation library_name: transformers tags: - openthaigpt widget: - text: "คำถาม: ซื้อมะม่วงมา 30ลูก ระหว่างกลับบ้านหล่นไป 15ลูก เลยจอดรถเก็บมาได้ 5ลูก ให้เพื่อนไป 7ผล ปลอกกินไป 2ลูก เน่าไปอีก 3ลูก ของเก่าอยู่ในตู้เย็นอีก 3ลูก จะมีมะม่วงเท่าไร" - text: "คำถาม: จงหาเลขคู่ที่มากกว่าหรือเท่ากับ 10 แต่น้อยกว่าหรือเท่ากับ 20" - text: "คำถาม: จงหาค่า x ในสมการ (x/2) + 7 = 10" - text: "คำถาม: ถ้ามีเพื่อนซื้อเนื้อวัวไป 5 กิโลกรัม และจ่ายราคา 300 บาทต่อกิโลกรัม จะต้องจ่ายเงินรวมทั้งสิ้นเท่าไร" - text: "คำถาม: ถ้ามีสามเหลี่ยมที่มีด้านเท่ากันทั้งหมด ความยาวด้านของแต่ละด้านคือ 4 เซนติเมตร จงหาพื้นที่ของสามเหลี่ยมนั้น" --- # OTG-Math-680 This model is a fine-tuned version of [Open Thai GPT](https://huggingface.co/kobkrit/openthaigpt-gpt2-pantipwiki-poc-0.0.1), trained on a Thai Math QA dataset of 680 pairs (GSM8K, GPT-3.5 Generated, Chain of Thought).
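The card above stops short of a usage snippet. A minimal sketch of loading the checkpoint with the `transformers` text-generation pipeline follows; the prompt is one of the widget examples from the card, while `max_new_tokens` is an illustrative assumption rather than a value documented there.

```python
from transformers import pipeline

# Minimal sketch: load the checkpoint named in the card with the text-generation pipeline.
generator = pipeline("text-generation", model="nakcnx/OTG-Math-680")

# One of the widget prompts listed in the card's YAML front matter.
prompt = "คำถาม: จงหาค่า x ในสมการ (x/2) + 7 = 10"

# max_new_tokens is an illustrative choice, not a setting taken from the card.
outputs = generator(prompt, max_new_tokens=64)
print(outputs[0]["generated_text"])
```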
Absie/a2c-AntBulletEnv-v0
Absie
2023-03-25T21:15:05Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T21:13:50Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1760.70 +/- 86.57 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
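The usage section above is still the template placeholder. A minimal sketch under stated assumptions: the checkpoint file name follows the usual `huggingface_sb3` naming convention and is not confirmed by the card, and actually rolling out the policy additionally needs the PyBullet `AntBulletEnv-v0` environment.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Minimal sketch: download the checkpoint and restore the A2C policy.
checkpoint = load_from_hub(
    repo_id="Absie/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed file name, not confirmed by the card
)
model = A2C.load(checkpoint)
```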
cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T21:10:05Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "MontezumaRevenge-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T21:10:03Z
--- tags: - MontezumaRevenge-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: MontezumaRevenge-v5 type: MontezumaRevenge-v5 metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **MontezumaRevenge-v5** This is a trained model of a PPO agent playing MontezumaRevenge-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id MontezumaRevenge-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MontezumaRevenge-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'MontezumaRevenge-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
aimarsg/prueba3
aimarsg
2023-03-25T20:29:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-03-25T19:54:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: prueba3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prueba3 This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2158 - Precision: 0.7162 - Recall: 0.6335 - F1: 0.6723 - Accuracy: 0.9737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.75e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 29 | 0.2562 | 0.7732 | 0.5976 | 0.6742 | 0.9719 | | No log | 2.0 | 58 | 0.2526 | 0.705 | 0.5618 | 0.6253 | 0.9704 | | No log | 3.0 | 87 | 0.2187 | 0.6833 | 0.6534 | 0.6680 | 0.9705 | | No log | 4.0 | 116 | 0.2205 | 0.6583 | 0.6295 | 0.6436 | 0.9715 | | No log | 5.0 | 145 | 0.2161 | 0.7162 | 0.6534 | 0.6833 | 0.9712 | | No log | 6.0 | 174 | 0.2293 | 0.6977 | 0.5976 | 0.6438 | 0.9722 | | No log | 7.0 | 203 | 0.2207 | 0.6972 | 0.6056 | 0.6482 | 0.9724 | | No log | 8.0 | 232 | 0.2343 | 0.6781 | 0.6295 | 0.6529 | 0.9707 | | No log | 9.0 | 261 | 0.2212 | 0.7115 | 0.5896 | 0.6449 | 0.9730 | | No log | 10.0 | 290 | 0.2171 | 0.7260 | 0.6016 | 0.6580 | 0.9734 | | No log | 11.0 | 319 | 0.2191 | 0.6851 | 0.6414 | 0.6626 | 0.9725 | | No log | 12.0 | 348 | 0.2101 | 0.7056 | 0.6494 | 0.6763 | 0.9740 | | No log | 13.0 | 377 | 0.2227 | 0.7240 | 0.6375 | 0.6780 | 0.9732 | | No log | 14.0 | 406 | 0.2226 | 0.7442 | 0.6375 | 0.6867 | 0.9739 | | No log | 15.0 | 435 | 0.2247 | 0.7339 | 0.6375 | 0.6823 | 0.9739 | | No log | 16.0 | 464 | 0.2167 | 0.6983 | 0.6454 | 0.6708 | 0.9729 | | No log | 17.0 | 493 | 0.2220 | 0.7281 | 0.6295 | 0.6752 | 0.9732 | | 0.0005 | 18.0 | 522 | 0.2294 | 0.7299 | 0.6135 | 0.6667 | 0.9725 | | 0.0005 | 19.0 | 551 | 0.2104 | 0.6949 | 0.6534 | 0.6735 | 0.9722 | | 0.0005 | 20.0 | 580 | 0.2103 | 0.7240 | 0.6375 | 0.6780 | 0.9730 | | 0.0005 | 21.0 | 609 | 0.2092 | 0.7137 | 0.6454 | 0.6778 | 0.9735 | | 0.0005 | 22.0 | 638 | 0.2091 | 0.7181 | 0.6494 | 0.6820 | 0.9737 | | 0.0005 | 23.0 | 667 | 0.2081 | 0.7162 | 0.6534 | 0.6833 | 0.9735 | | 0.0005 | 24.0 | 696 | 0.2198 | 0.7264 | 0.6135 | 0.6652 | 0.9722 | | 0.0005 | 25.0 | 725 | 0.2206 | 0.7290 | 0.6215 | 0.6710 | 0.9725 | | 0.0005 | 26.0 | 754 | 0.2194 | 0.7256 | 0.6215 | 0.6695 | 0.9735 | | 0.0005 | 27.0 | 783 | 0.2220 | 0.7290 | 0.6215 | 0.6710 | 0.9739 | | 0.0005 | 28.0 | 812 | 0.2230 | 0.7290 | 0.6215 | 0.6710 | 0.9735 | | 0.0005 | 29.0 | 841 | 0.2163 | 0.7182 | 0.6295 | 0.6709 | 0.9737 | | 0.0005 | 30.0 | 870 | 0.2158 | 0.7162 | 0.6335 | 0.6723 | 0.9737 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
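For reference, a minimal sketch of running this token-classification checkpoint with the `transformers` pipeline; the example sentence is a hypothetical Spanish placeholder, since the card does not document the training data or entity types.

```python
from transformers import pipeline

# Minimal sketch: token classification (NER-style tagging) with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="aimarsg/prueba3",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Hypothetical example sentence; the card does not say which entity types the model tags.
print(ner("El paciente recibió 500 mg de paracetamol cada 8 horas."))
```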
impira/layoutlm-invoices
impira
2023-03-25T20:21:25Z
7,780
182
transformers
[ "transformers", "pytorch", "safetensors", "layoutlm", "document-question-answering", "pdf", "invoices", "en", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
document-question-answering
2022-09-06T17:49:13Z
--- language: en license: cc-by-nc-sa-4.0 pipeline_tag: document-question-answering tags: - layoutlm - document-question-answering - pdf - invoices widget: - text: "What is the invoice number?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png" - text: "What is the purchase amount?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg" --- # LayoutLM for Invoices This is a fine-tuned version of the multi-modal [LayoutLM](https://aka.ms/layoutlm) model for the task of question answering on invoices and other documents. It has been fine-tuned on a proprietary dataset of invoices as well as both [SQuAD2.0](https://huggingface.co/datasets/squad_v2) and [DocVQA](https://www.docvqa.org/) for general comprehension. ## Non-consecutive tokens Unlike other QA models, which can only extract consecutive tokens (because they predict the start and end of a sequence), this model can predict longer-range, non-consecutive sequences with an additional classifier head. For example, QA models often encounter this failure mode: ### Before ![Broken Address](./before.png) ### After However this model is able to predict non-consecutive tokens and therefore the address correctly: ![Two-line Address](./after.png) ## Getting started with the model The best way to use this model is via [DocQuery](https://github.com/impira/docquery). ## About us This model was created by the team at [Impira](https://www.impira.com/).
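Besides DocQuery, a minimal sketch with the `transformers` document-question-answering pipeline is shown below. It assumes `pytesseract` and the Tesseract OCR binary are installed so the pipeline can extract word boxes from the image, which the card itself does not spell out; the image URL and question are taken from the card's widget section.

```python
from transformers import pipeline

# Minimal sketch: document QA with the invoice-tuned LayoutLM checkpoint.
# Assumes pytesseract + the Tesseract binary are available to OCR the input image.
doc_qa = pipeline("document-question-answering", model="impira/layoutlm-invoices")

result = doc_qa(
    image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
    question="What is the invoice number?",
)
print(result)
```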
emmuzoo/ppo-SnowballTarget
emmuzoo
2023-03-25T20:20:22Z
20
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-03-25T20:20:17Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Find your model_id: emmuzoo/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
naeisher/a2c-AntBulletEnv-v0
naeisher
2023-03-25T19:59:31Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T19:58:21Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 851.30 +/- 36.85 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
aimarsg/prueba2
aimarsg
2023-03-25T19:45:42Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-03-25T18:29:09Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: prueba2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prueba2 This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1829 - Precision: 0.7232 - Recall: 0.6454 - F1: 0.6821 - Accuracy: 0.9744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 29 | 0.1726 | 0.7014 | 0.5896 | 0.6407 | 0.9720 | | No log | 2.0 | 58 | 0.1712 | 0.6090 | 0.6454 | 0.6267 | 0.9679 | | No log | 3.0 | 87 | 0.1665 | 0.6746 | 0.6773 | 0.6759 | 0.9720 | | No log | 4.0 | 116 | 0.1945 | 0.7042 | 0.5976 | 0.6466 | 0.9719 | | No log | 5.0 | 145 | 0.1850 | 0.6927 | 0.6016 | 0.6439 | 0.9724 | | No log | 6.0 | 174 | 0.1872 | 0.6570 | 0.6335 | 0.6450 | 0.9697 | | No log | 7.0 | 203 | 0.2014 | 0.7527 | 0.5578 | 0.6407 | 0.9730 | | No log | 8.0 | 232 | 0.1696 | 0.6706 | 0.6733 | 0.6720 | 0.9727 | | No log | 9.0 | 261 | 0.1743 | 0.6820 | 0.6494 | 0.6653 | 0.9730 | | No log | 10.0 | 290 | 0.1686 | 0.6735 | 0.6574 | 0.6653 | 0.9730 | | No log | 11.0 | 319 | 0.1868 | 0.6934 | 0.5857 | 0.6350 | 0.9712 | | No log | 12.0 | 348 | 0.1930 | 0.7089 | 0.6016 | 0.6509 | 0.9727 | | No log | 13.0 | 377 | 0.1826 | 0.7087 | 0.6494 | 0.6778 | 0.9730 | | No log | 14.0 | 406 | 0.1920 | 0.7103 | 0.6056 | 0.6538 | 0.9722 | | No log | 15.0 | 435 | 0.1848 | 0.6402 | 0.6733 | 0.6563 | 0.9712 | | No log | 16.0 | 464 | 0.1843 | 0.6822 | 0.6414 | 0.6612 | 0.9734 | | No log | 17.0 | 493 | 0.1874 | 0.7009 | 0.6255 | 0.6611 | 0.9730 | | 0.0016 | 18.0 | 522 | 0.1844 | 0.6736 | 0.6494 | 0.6613 | 0.9730 | | 0.0016 | 19.0 | 551 | 0.1850 | 0.7273 | 0.6375 | 0.6794 | 0.9744 | | 0.0016 | 20.0 | 580 | 0.1737 | 0.7179 | 0.6693 | 0.6928 | 0.9749 | | 0.0016 | 21.0 | 609 | 0.1798 | 0.7376 | 0.6494 | 0.6907 | 0.9747 | | 0.0016 | 22.0 | 638 | 0.1797 | 0.7174 | 0.6574 | 0.6861 | 0.9739 | | 0.0016 | 23.0 | 667 | 0.1783 | 0.7046 | 0.6653 | 0.6844 | 0.9742 | | 0.0016 | 24.0 | 696 | 0.1784 | 0.7301 | 0.6574 | 0.6918 | 0.9745 | | 0.0016 | 25.0 | 725 | 0.1818 | 0.7352 | 0.6414 | 0.6851 | 0.9745 | | 0.0016 | 26.0 | 754 | 0.1823 | 0.7419 | 0.6414 | 0.6880 | 0.9745 | | 0.0016 | 27.0 | 783 | 0.1786 | 0.7205 | 0.6574 | 0.6875 | 0.9749 | | 0.0016 | 28.0 | 812 | 0.1781 | 0.7051 | 0.6574 | 0.6804 | 0.9734 | | 0.0016 | 29.0 | 841 | 0.1802 | 0.7181 | 0.6494 | 0.6820 | 0.9744 | | 0.0016 | 30.0 | 870 | 0.1801 | 0.7174 | 0.6574 | 0.6861 | 0.9749 | | 0.0016 | 31.0 | 899 | 0.1824 | 0.7232 | 0.6454 | 0.6821 | 0.9745 | | 0.0016 | 32.0 | 928 | 0.1829 | 0.7232 | 0.6454 | 0.6821 | 0.9744 | ### Framework versions - Transformers 4.27.3 - Pytorch 
1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Nazzyk/Reinforce-Pixelcopter-PLE-v0
Nazzyk
2023-03-25T19:38:46Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T13:09:00Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 33.50 +/- 26.03 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
mrm8488/mobilebert-uncased-finetuned-squadv1
mrm8488
2023-03-25T19:26:44Z
32
1
transformers
[ "transformers", "pytorch", "safetensors", "mobilebert", "question-answering", "en", "dataset:squad", "arxiv:2004.02984", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en datasets: - squad --- # MobileBERT + SQuAD (v1.1) 📱❓ [mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) fine-tuned on the [SQuAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/v2.0/dev/) for the **Q&A** downstream task. ## Details of the downstream task (Q&A) - Model 🧠 **MobileBERT** is a thin version of *BERT_LARGE*, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. The checkpoint used here is the original MobileBert Optimized Uncased English: (uncased_L-24_H-128_B-512_A-4_F-4_OPT) checkpoint. More about the model [here](https://arxiv.org/abs/2004.02984) ## Details of the downstream task (Q&A) - Dataset 📚 **S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles. ## Model training 🏋️‍ The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command: ```bash python transformers/examples/question-answering/run_squad.py \ --model_type bert \ --model_name_or_path 'google/mobilebert-uncased' \ --do_eval \ --do_train \ --do_lower_case \ --train_file '/content/dataset/train-v1.1.json' \ --predict_file '/content/dataset/dev-v1.1.json' \ --per_gpu_train_batch_size 16 \ --learning_rate 3e-5 \ --num_train_epochs 5 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir '/content/output' \ --overwrite_output_dir \ --save_steps 1000 ``` It is worth noting that this model converges much faster than other ones, so it is also cheap to fine-tune. ## Test set Results 🧾 | Metric | # Value | | ------ | --------- | | **EM** | **82.33** | | **F1** | **89.64** | | **Size**| **94 MB** | ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline QnA_pipeline = pipeline('question-answering', model='mrm8488/mobilebert-uncased-finetuned-squadv1') QnA_pipeline({ 'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.', 'question': 'Who did identified it ?' }) # Output: {'answer': 'scientists.', 'end': 106, 'score': 0.7885545492172241, 'start': 96} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
Jieming/al_rm_checkpoint
Jieming
2023-03-25T19:22:59Z
0
0
null
[ "pytorch", "generated_from_trainer", "license:mit", "region:us" ]
null
2023-03-25T17:56:39Z
--- license: mit tags: - generated_from_trainer model-index: - name: al_rm_checkpoint results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # al_rm_checkpoint This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
golightly/rl_course_vizdoom_health_gathering_supreme
golightly
2023-03-25T19:11:57Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T19:11:50Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.26 +/- 5.00 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r golightly/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
ROGRANMAR/my_awesome_asr_mind_model
ROGRANMAR
2023-03-25T19:09:55Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-03-25T16:27:38Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_asr_mind_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_asr_mind_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
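The card lacks a usage example; a minimal sketch with the automatic-speech-recognition pipeline follows. The audio file name is a hypothetical placeholder, and decoding arbitrary audio formats requires ffmpeg on the system.

```python
from transformers import pipeline

# Minimal sketch: transcribe an audio file with the fine-tuned wav2vec2 checkpoint.
asr = pipeline("automatic-speech-recognition", model="ROGRANMAR/my_awesome_asr_mind_model")

# "sample.wav" is a hypothetical local file; 16 kHz mono audio is the usual wav2vec2 expectation.
print(asr("sample.wav"))
```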
HiTZ/alpaca-lora-30b-en-pt-es-ca-eu-gl-at
HiTZ
2023-03-25T18:55:53Z
0
0
null
[ "generated_from_trainer", "dataset:HiTZ/alpaca_mt", "license:other", "region:us" ]
null
2023-03-23T19:51:21Z
--- license: other tags: - generated_from_trainer datasets: - HiTZ/alpaca_mt model-index: - name: alpaca-lora-30b-en-pt-es-ca-eu-gl-at results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # alpaca-lora-30b-en-pt-es-ca-eu-gl-at This model is a fine-tuned version of [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) on the HiTZ/alpaca_mt ['en', 'pt', 'es', 'ca', 'eu', 'gl', 'at'] dataset. It achieves the following results on the evaluation set: - Loss: 0.9088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 21 - total_train_batch_size: 126 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1695 | 0.04 | 100 | 1.1716 | | 1.1211 | 0.07 | 200 | 1.0964 | | 1.0591 | 0.11 | 300 | 1.0590 | | 1.0234 | 0.14 | 400 | 1.0341 | | 1.0345 | 0.18 | 500 | 1.0165 | | 0.9932 | 0.22 | 600 | 1.0024 | | 0.9948 | 0.25 | 700 | 0.9895 | | 1.01 | 0.29 | 800 | 0.9794 | | 0.9488 | 0.32 | 900 | 0.9708 | | 0.9518 | 0.36 | 1000 | 0.9627 | | 0.9463 | 0.4 | 1100 | 0.9557 | | 0.956 | 0.43 | 1200 | 0.9498 | | 0.9521 | 0.47 | 1300 | 0.9437 | | 0.9345 | 0.51 | 1400 | 0.9385 | | 0.9469 | 0.54 | 1500 | 0.9337 | | 0.9466 | 0.58 | 1600 | 0.9297 | | 0.9403 | 0.61 | 1700 | 0.9257 | | 0.9179 | 0.65 | 1800 | 0.9219 | | 0.9468 | 0.69 | 1900 | 0.9190 | | 0.9173 | 0.72 | 2000 | 0.9163 | | 0.9172 | 0.76 | 2100 | 0.9142 | | 0.9351 | 0.79 | 2200 | 0.9124 | | 0.9238 | 0.83 | 2300 | 0.9110 | | 0.9057 | 0.87 | 2400 | 0.9099 | | 0.9309 | 0.9 | 2500 | 0.9093 | | 0.8893 | 0.94 | 2600 | 0.9090 | | 0.9095 | 0.97 | 2700 | 0.9088 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.10.1 - Tokenizers 0.13.2
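Judging from the model name, this repository presumably holds a LoRA adapter rather than full model weights; that is an assumption, as the card does not say how to load it. A sketch of attaching such an adapter to the base model with `peft` is given below; loading the 30B base model in practice needs substantial GPU memory or quantization.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Sketch under the assumption that the repo contains a PEFT/LoRA adapter
# for the base model cited in the card; this is not confirmed by the card.
base_id = "decapoda-research/llama-30b-hf"
adapter_id = "HiTZ/alpaca-lora-30b-en-pt-es-ca-eu-gl-at"

tokenizer = LlamaTokenizer.from_pretrained(base_id)
base = LlamaForCausalLM.from_pretrained(base_id, device_map="auto")  # needs accelerate and large GPU memory
model = PeftModel.from_pretrained(base, adapter_id)
```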
harshil128/Reinforce-cartpole-V2
harshil128
2023-03-25T18:46:41Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T18:46:36Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole-V2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 451.60 +/- 145.20 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Jieming/rm_checkpoint
Jieming
2023-03-25T18:39:45Z
0
0
null
[ "pytorch", "generated_from_trainer", "license:mit", "region:us" ]
null
2023-03-25T17:56:23Z
--- license: mit tags: - generated_from_trainer model-index: - name: rm_checkpoint results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rm_checkpoint This model is a fine-tuned version of [EleutherAI/gpt-neo-2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
harshil128/ppo-LunarLander-v2
harshil128
2023-03-25T18:28:38Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-01-21T21:02:19Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 227.30 +/- 81.70 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
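As with the other Stable-Baselines3 cards in this dump, the usage block is a placeholder. A minimal sketch follows; the checkpoint file name is an assumption based on the usual `huggingface_sb3` convention, and evaluation would additionally need the Gym `LunarLander-v2` environment (Box2D).

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Minimal sketch: download the checkpoint and restore the PPO policy.
checkpoint = load_from_hub(
    repo_id="harshil128/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed file name, not confirmed by the card
)
model = PPO.load(checkpoint)
```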
samzoozi/ppo-LunarLander-v2
samzoozi
2023-03-25T18:26:35Z
6
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T18:26:10Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.35 +/- 20.74 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Liborn/Libor
Liborn
2023-03-25T18:26:28Z
0
0
adapter-transformers
[ "adapter-transformers", "climate", "aa", "dataset:fka/awesome-chatgpt-prompts", "arxiv:1910.09700", "license:openrail", "region:us" ]
null
2023-03-25T18:17:47Z
--- license: openrail datasets: - fka/awesome-chatgpt-prompts language: - aa metrics: - accuracy library_name: adapter-transformers tags: - climate --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aimarsg/prueba1
aimarsg
2023-03-25T18:25:04Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-03-25T17:47:11Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: prueba1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # prueba1 This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1842 - Precision: 0.7072 - Recall: 0.6255 - F1: 0.6638 - Accuracy: 0.9724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 29 | 0.1520 | 0.5625 | 0.6813 | 0.6162 | 0.9659 | | No log | 2.0 | 58 | 0.1552 | 0.6293 | 0.5817 | 0.6046 | 0.9686 | | No log | 3.0 | 87 | 0.1586 | 0.6667 | 0.5737 | 0.6167 | 0.9709 | | No log | 4.0 | 116 | 0.1595 | 0.6981 | 0.5896 | 0.6393 | 0.9722 | | No log | 5.0 | 145 | 0.1699 | 0.6729 | 0.5737 | 0.6194 | 0.9676 | | No log | 6.0 | 174 | 0.1753 | 0.6577 | 0.5817 | 0.6173 | 0.9689 | | No log | 7.0 | 203 | 0.1665 | 0.6540 | 0.6175 | 0.6352 | 0.9681 | | No log | 8.0 | 232 | 0.1792 | 0.7157 | 0.5618 | 0.6295 | 0.9712 | | No log | 9.0 | 261 | 0.1682 | 0.7048 | 0.5896 | 0.6421 | 0.9714 | | No log | 10.0 | 290 | 0.1732 | 0.7366 | 0.6016 | 0.6623 | 0.9724 | | No log | 11.0 | 319 | 0.1663 | 0.672 | 0.6693 | 0.6707 | 0.9725 | | No log | 12.0 | 348 | 0.1882 | 0.7071 | 0.5578 | 0.6236 | 0.9692 | | No log | 13.0 | 377 | 0.1825 | 0.7103 | 0.6056 | 0.6538 | 0.9710 | | No log | 14.0 | 406 | 0.1755 | 0.7164 | 0.5737 | 0.6372 | 0.9709 | | No log | 15.0 | 435 | 0.1950 | 0.6842 | 0.5697 | 0.6217 | 0.9689 | | No log | 16.0 | 464 | 0.1660 | 0.7240 | 0.6375 | 0.6780 | 0.9727 | | No log | 17.0 | 493 | 0.1833 | 0.7255 | 0.5896 | 0.6505 | 0.9724 | | 0.0061 | 18.0 | 522 | 0.1832 | 0.7190 | 0.6016 | 0.6551 | 0.9702 | | 0.0061 | 19.0 | 551 | 0.1762 | 0.6828 | 0.6175 | 0.6485 | 0.9707 | | 0.0061 | 20.0 | 580 | 0.1785 | 0.7346 | 0.6175 | 0.6710 | 0.9734 | | 0.0061 | 21.0 | 609 | 0.1791 | 0.7093 | 0.6414 | 0.6736 | 0.9739 | | 0.0061 | 22.0 | 638 | 0.1843 | 0.7476 | 0.6255 | 0.6811 | 0.9737 | | 0.0061 | 23.0 | 667 | 0.1837 | 0.7371 | 0.6255 | 0.6767 | 0.9734 | | 0.0061 | 24.0 | 696 | 0.1867 | 0.7176 | 0.6175 | 0.6638 | 0.9715 | | 0.0061 | 25.0 | 725 | 0.1844 | 0.7089 | 0.6016 | 0.6509 | 0.9710 | | 0.0061 | 26.0 | 754 | 0.1815 | 0.7072 | 0.6255 | 0.6638 | 0.9725 | | 0.0061 | 27.0 | 783 | 0.1822 | 0.7021 | 0.6574 | 0.6790 | 0.9737 | | 0.0061 | 28.0 | 812 | 0.1853 | 0.7048 | 0.6375 | 0.6695 | 0.9732 | | 0.0061 | 29.0 | 841 | 0.1845 | 0.7069 | 0.6534 | 0.6791 | 0.9735 | | 0.0061 | 30.0 | 870 | 0.1827 | 0.7004 | 0.6614 | 0.6803 | 0.9735 | | 0.0061 | 31.0 | 899 | 0.1850 | 0.7014 | 0.6175 | 0.6568 | 0.9719 | | 0.0061 | 32.0 | 928 | 0.1842 | 0.7072 | 0.6255 | 0.6638 | 0.9724 | ### Framework versions - Transformers 4.27.3 - Pytorch 
1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
MarcusAGray/ppo-PyramidsTraining
MarcusAGray
2023-03-25T17:58:57Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-03-25T17:58:52Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: MarcusAGray/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
stoked/pulsar
stoked
2023-03-25T17:58:49Z
0
0
asteroid
[ "asteroid", "en", "dataset:fka/awesome-chatgpt-prompts", "license:afl-3.0", "region:us" ]
null
2023-03-25T17:57:53Z
--- license: afl-3.0 datasets: - fka/awesome-chatgpt-prompts language: - en metrics: - code_eval library_name: asteroid ---
cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:58:42Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Enduro-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:58:40Z
--- tags: - Enduro-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Enduro-v5 type: Enduro-v5 metrics: - type: mean_reward value: 2317.90 +/- 109.39 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Enduro-v5** This is a trained model of a PPO agent playing Enduro-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Enduro-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Enduro-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:55:17Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Frostbite-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:55:15Z
--- tags: - Frostbite-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Frostbite-v5 type: Frostbite-v5 metrics: - type: mean_reward value: 314.00 +/- 18.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Frostbite-v5** This is a trained model of a PPO agent playing Frostbite-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Frostbite-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Frostbite-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:52:56Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Enduro-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:52:54Z
--- tags: - Enduro-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Enduro-v5 type: Enduro-v5 metrics: - type: mean_reward value: 2344.70 +/- 18.42 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Enduro-v5** This is a trained model of a PPO agent playing Enduro-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Enduro-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Enduro-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:52:35Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "FishingDerby-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:52:34Z
--- tags: - FishingDerby-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FishingDerby-v5 type: FishingDerby-v5 metrics: - type: mean_reward value: 27.80 +/- 10.89 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **FishingDerby-v5** This is a trained model of a PPO agent playing FishingDerby-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id FishingDerby-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'FishingDerby-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:52:09Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Enduro-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:52:08Z
--- tags: - Enduro-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Enduro-v5 type: Enduro-v5 metrics: - type: mean_reward value: 2241.30 +/- 284.69 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Enduro-v5** This is a trained model of a PPO agent playing Enduro-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Enduro-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Enduro-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Enduro-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:51:28Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "FishingDerby-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:51:27Z
--- tags: - FishingDerby-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FishingDerby-v5 type: FishingDerby-v5 metrics: - type: mean_reward value: 27.00 +/- 11.59 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **FishingDerby-v5** This is a trained model of a PPO agent playing FishingDerby-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id FishingDerby-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'FishingDerby-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:49:05Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "UpNDown-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:49:04Z
--- tags: - UpNDown-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: UpNDown-v5 type: UpNDown-v5 metrics: - type: mean_reward value: 200052.00 +/- 60214.62 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **UpNDown-v5** This is a trained model of a PPO agent playing UpNDown-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id UpNDown-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'UpNDown-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
sagorsarker/emailgenerator
sagorsarker
2023-03-25T17:48:24Z
32
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "email-generation", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-29T13:14:08Z
--- language: en tags: - email-generation license: mit --- EmailGenerator is a GPT-2 text-generation model fine-tuned on the [emailblog](https://www.kaggle.com/datasets/mikeschmidtavemac/emailblog) dataset for the [EmailWriter](https://github.com/sagorbrur/EmailWriter) project. For details about this model, check the [EmailWriter](https://github.com/sagorbrur/EmailWriter) repository.
cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:47:43Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "UpNDown-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:47:41Z
--- tags: - UpNDown-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: UpNDown-v5 type: UpNDown-v5 metrics: - type: mean_reward value: 189488.00 +/- 65579.51 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **UpNDown-v5** This is a trained model of a PPO agent playing UpNDown-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id UpNDown-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'UpNDown-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:47:40Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "UpNDown-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:47:38Z
--- tags: - UpNDown-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: UpNDown-v5 type: UpNDown-v5 metrics: - type: mean_reward value: 191595.00 +/- 74974.86 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **UpNDown-v5** This is a trained model of a PPO agent playing UpNDown-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id UpNDown-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/UpNDown-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id UpNDown-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'UpNDown-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
526christian/526Mix-less-crunch-test
526christian
2023-03-25T17:47:18Z
6
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "region:us" ]
null
2023-03-25T01:53:37Z
--- license: creativeml-openrail-m ---
butchland/unit8-LunarLander-v2
butchland
2023-03-25T17:46:20Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:46:14Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -138.81 +/- 79.09 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'butchland/unit8-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
emilianJR/haruna_lora
emilianJR
2023-03-25T17:44:29Z
5
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-03-25T08:14:39Z
--- license: creativeml-openrail-m base_model: andite/anything-v4.0 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # LoRA text2image fine-tuning - https://huggingface.co/kubanemil/haruna_lora These are LoRA adaptation weights for https://huggingface.co/kubanemil/haruna_lora. The weights were fine-tuned on a dataset of Haruna Sakura images. Some example images are shown below.
cleanrl/WizardOfWor-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:32:01Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "WizardOfWor-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:31:59Z
--- tags: - WizardOfWor-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: WizardOfWor-v5 type: WizardOfWor-v5 metrics: - type: mean_reward value: 5430.00 +/- 3764.85 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **WizardOfWor-v5** This is a trained model of a PPO agent playing WizardOfWor-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id WizardOfWor-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id WizardOfWor-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'WizardOfWor-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:30:41Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Tennis-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:30:40Z
--- tags: - Tennis-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Tennis-v5 type: Tennis-v5 metrics: - type: mean_reward value: 22.00 +/- 1.41 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Tennis-v5** This is a trained model of a PPO agent playing Tennis-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Tennis-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Tennis-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:30:23Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Tennis-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:30:21Z
--- tags: - Tennis-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Tennis-v5 type: Tennis-v5 metrics: - type: mean_reward value: 22.60 +/- 1.11 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Tennis-v5** This is a trained model of a PPO agent playing Tennis-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Tennis-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Tennis-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Tennis-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:29:55Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Venture-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:29:54Z
--- tags: - Venture-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Venture-v5 type: Venture-v5 metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Venture-v5** This is a trained model of a PPO agent playing Venture-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Venture-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Venture-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:28:24Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Venture-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:28:22Z
--- tags: - Venture-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Venture-v5 type: Venture-v5 metrics: - type: mean_reward value: 0.00 +/- 0.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Venture-v5** This is a trained model of a PPO agent playing Venture-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Venture-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Venture-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Venture-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
dmenini/ppo-Pyramids
dmenini
2023-03-25T17:27:17Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-03-25T16:34:40Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: dmenini/ppo-Pyramids 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
artbreguez/q-CartPole-v1
artbreguez
2023-03-25T17:26:28Z
0
0
null
[ "CartPole-v1", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:08:10Z
--- tags: - CartPole-v1 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Q-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 94.79 +/- 12.53 name: mean_reward verified: false --- # **Q-Learning** Agent playing **CartPole-v1** This is a trained model of a **Q-Learning** agent playing **CartPole-v1**. ## Usage ```python model = load_from_hub(repo_id="artbreguez/q-CartPole-v1", filename="q-learning.pkl") env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:23:29Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Surround-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:23:27Z
--- tags: - Surround-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Surround-v5 type: Surround-v5 metrics: - type: mean_reward value: 6.30 +/- 2.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Surround-v5** This is a trained model of a PPO agent playing Surround-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Surround-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Surround-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:23:03Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Surround-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:23:02Z
--- tags: - Surround-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Surround-v5 type: Surround-v5 metrics: - type: mean_reward value: 5.30 +/- 1.35 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Surround-v5** This is a trained model of a PPO agent playing Surround-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Surround-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Surround-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
LarryAIDraw/tentenCharacterLohaFullckpt_loha
LarryAIDraw
2023-03-25T17:22:23Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-25T13:05:25Z
--- license: creativeml-openrail-m --- https://civitai.com/models/21305/tenten-character-lohafullckpt
cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:22:19Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Surround-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:22:17Z
--- tags: - Surround-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Surround-v5 type: Surround-v5 metrics: - type: mean_reward value: 5.30 +/- 2.69 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Surround-v5** This is a trained model of a PPO agent playing Surround-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Surround-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Surround-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
LarryAIDraw/teaAnzuDsodVerYuGiOh_anzudsodv1
LarryAIDraw
2023-03-25T17:22:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-25T13:07:16Z
--- license: creativeml-openrail-m --- https://civitai.com/models/18612/teaanzu-dsod-ver-or-yu-gi-oh
mlewand/rl_course_vizdoom_health_gathering_supreme
mlewand
2023-03-25T17:17:19Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-26T16:20:32Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 9.99 +/- 4.96 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r mlewand/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:16:50Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "SpaceInvaders-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:16:49Z
--- tags: - SpaceInvaders-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvaders-v5 type: SpaceInvaders-v5 metrics: - type: mean_reward value: 7318.50 +/- 6248.69 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5** This is a trained model of a PPO agent playing SpaceInvaders-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id SpaceInvaders-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'SpaceInvaders-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:16:30Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "StarGunner-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:16:28Z
--- tags: - StarGunner-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: StarGunner-v5 type: StarGunner-v5 metrics: - type: mean_reward value: 66420.00 +/- 7673.43 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **StarGunner-v5** This is a trained model of a PPO agent playing StarGunner-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id StarGunner-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'StarGunner-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
droid22/poca-SoccerTwos
droid22
2023-03-25T17:15:41Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-03-25T17:15:35Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: droid22/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:14:09Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "StarGunner-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:14:07Z
--- tags: - StarGunner-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: StarGunner-v5 type: StarGunner-v5 metrics: - type: mean_reward value: 69590.00 +/- 6098.60 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **StarGunner-v5** This is a trained model of a PPO agent playing StarGunner-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id StarGunner-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'StarGunner-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:08:03Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Solaris-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:08:01Z
--- tags: - Solaris-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Solaris-v5 type: Solaris-v5 metrics: - type: mean_reward value: 2348.00 +/- 645.24 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Solaris-v5** This is a trained model of a PPO agent playing Solaris-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Solaris-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Solaris-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Solaris-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:07:54Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Solaris-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:07:53Z
--- tags: - Solaris-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Solaris-v5 type: Solaris-v5 metrics: - type: mean_reward value: 2068.00 +/- 1014.94 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Solaris-v5** This is a trained model of a PPO agent playing Solaris-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Solaris-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Solaris-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Solaris-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/NameThisGame-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:05:08Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "NameThisGame-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:05:06Z
--- tags: - NameThisGame-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: NameThisGame-v5 type: NameThisGame-v5 metrics: - type: mean_reward value: 11098.00 +/- 1705.77 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **NameThisGame-v5** This is a trained model of a PPO agent playing NameThisGame-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id NameThisGame-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id NameThisGame-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'NameThisGame-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:04:27Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "KungFuMaster-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:04:25Z
--- tags: - KungFuMaster-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: KungFuMaster-v5 type: KungFuMaster-v5 metrics: - type: mean_reward value: 29710.00 +/- 7182.96 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **KungFuMaster-v5** This is a trained model of a PPO agent playing KungFuMaster-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id KungFuMaster-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'KungFuMaster-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:04:05Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Krull-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:04:04Z
--- tags: - Krull-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Krull-v5 type: Krull-v5 metrics: - type: mean_reward value: 7596.00 +/- 1556.09 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Krull-v5** This is a trained model of a PPO agent playing Krull-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Krull-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Krull-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Krull-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:04:05Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "KungFuMaster-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:04:03Z
--- tags: - KungFuMaster-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: KungFuMaster-v5 type: KungFuMaster-v5 metrics: - type: mean_reward value: 25720.00 +/- 5122.27 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **KungFuMaster-v5** This is a trained model of a PPO agent playing KungFuMaster-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id KungFuMaster-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'KungFuMaster-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:04:04Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Krull-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:04:03Z
--- tags: - Krull-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Krull-v5 type: Krull-v5 metrics: - type: mean_reward value: 7096.00 +/- 1767.62 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Krull-v5** This is a trained model of a PPO agent playing Krull-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Krull-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Krull-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Krull-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T17:04:04Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "KungFuMaster-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:04:02Z
--- tags: - KungFuMaster-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: KungFuMaster-v5 type: KungFuMaster-v5 metrics: - type: mean_reward value: 19080.00 +/- 6065.28 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **KungFuMaster-v5** This is a trained model of a PPO agent playing KungFuMaster-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id KungFuMaster-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'KungFuMaster-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T17:02:53Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Krull-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:02:52Z
--- tags: - Krull-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Krull-v5 type: Krull-v5 metrics: - type: mean_reward value: 7739.00 +/- 993.06 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Krull-v5** This is a trained model of a PPO agent playing Krull-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Krull-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Krull-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Krull-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Krull-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
MarcusAGray/ppo-SnowballTarget
MarcusAGray
2023-03-25T17:02:18Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-03-25T17:02:13Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: MarcusAGray/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
cleanrl/MsPacman-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T17:01:09Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "MsPacman-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T17:01:08Z
--- tags: - MsPacman-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: MsPacman-v5 type: MsPacman-v5 metrics: - type: mean_reward value: 2760.00 +/- 950.80 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **MsPacman-v5** This is a trained model of a PPO agent playing MsPacman-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id MsPacman-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'MsPacman-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T16:59:27Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Kangaroo-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:59:26Z
--- tags: - Kangaroo-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Kangaroo-v5 type: Kangaroo-v5 metrics: - type: mean_reward value: 20.00 +/- 60.00 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Kangaroo-v5** This is a trained model of a PPO agent playing Kangaroo-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Kangaroo-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Kangaroo-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:59:15Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Kangaroo-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:59:13Z
--- tags: - Kangaroo-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Kangaroo-v5 type: Kangaroo-v5 metrics: - type: mean_reward value: 1600.00 +/- 282.84 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Kangaroo-v5** This is a trained model of a PPO agent playing Kangaroo-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Kangaroo-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Kangaroo-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T16:54:41Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Jamesbond-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:54:40Z
--- tags: - Jamesbond-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Jamesbond-v5 type: Jamesbond-v5 metrics: - type: mean_reward value: 465.00 +/- 128.55 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Jamesbond-v5** This is a trained model of a PPO agent playing Jamesbond-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Jamesbond-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Jamesbond-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:50:35Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Jamesbond-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:50:33Z
--- tags: - Jamesbond-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Jamesbond-v5 type: Jamesbond-v5 metrics: - type: mean_reward value: 490.00 +/- 124.10 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Jamesbond-v5** This is a trained model of a PPO agent playing Jamesbond-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Jamesbond-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Jamesbond-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa
vocabtrimmer
2023-03-25T16:44:14Z
105
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "question answering", "fr", "dataset:lmqg/qg_frquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-03-19T16:55:12Z
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: fr datasets: - lmqg/qg_frquad pipeline_tag: text2text-generation tags: - question answering widget: - text: "question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu." example_title: "Question Answering Example 1" - text: "question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l'électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D'un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu'être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire." example_title: "Question Answering Example 2" model-index: - name: vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_frquad type: default args: default metrics: - name: BLEU4 (Question Answering) type: bleu4_question_answering value: 10.43 - name: ROUGE-L (Question Answering) type: rouge_l_question_answering value: 22.59 - name: METEOR (Question Answering) type: meteor_question_answering value: 17.44 - name: BERTScore (Question Answering) type: bertscore_question_answering value: 86.74 - name: MoverScore (Question Answering) type: moverscore_question_answering value: 66.71 - name: AnswerF1Score (Question Answering) type: answer_f1_score__question_answering value: 34.34 - name: AnswerExactMatch (Question Answering) type: answer_exact_match_question_answering value: 20.01 --- # Model Card of `vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa` This model is a fine-tuned version of [ckpts/mt5-small-trimmed-fr-60000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-60000) for the question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview - **Language model:** [ckpts/mt5-small-trimmed-fr-60000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-60000) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa") # model prediction answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa") output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.") ```
## Evaluation - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 20.01 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 34.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 86.74 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 17.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 14.51 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 12.22 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 10.43 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 17.44 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 66.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 22.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_question'] - output_types: ['answer'] - prefix_types: None - model: ckpts/mt5-small-trimmed-fr-60000 - max_length: 512 - max_length_output: 32 - epoch: 24 - batch: 32 - lr: 0.0005 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 2 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:27:22Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "DemonAttack-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:27:20Z
--- tags: - DemonAttack-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: DemonAttack-v5 type: DemonAttack-v5 metrics: - type: mean_reward value: 105099.50 +/- 31017.43 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **DemonAttack-v5** This is a trained model of a PPO agent playing DemonAttack-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id DemonAttack-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'DemonAttack-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T16:26:41Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "DemonAttack-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:26:39Z
--- tags: - DemonAttack-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: DemonAttack-v5 type: DemonAttack-v5 metrics: - type: mean_reward value: 88149.00 +/- 42555.30 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **DemonAttack-v5** This is a trained model of a PPO agent playing DemonAttack-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id DemonAttack-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'DemonAttack-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T16:22:51Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "BeamRider-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:22:49Z
--- tags: - BeamRider-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BeamRider-v5 type: BeamRider-v5 metrics: - type: mean_reward value: 5486.40 +/- 3101.98 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **BeamRider-v5** This is a trained model of a PPO agent playing BeamRider-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id BeamRider-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'BeamRider-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T16:22:31Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "BeamRider-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:22:29Z
--- tags: - BeamRider-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BeamRider-v5 type: BeamRider-v5 metrics: - type: mean_reward value: 4463.00 +/- 1967.26 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **BeamRider-v5** This is a trained model of a PPO agent playing BeamRider-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id BeamRider-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'BeamRider-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:21:13Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Centipede-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:21:12Z
--- tags: - Centipede-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Centipede-v5 type: Centipede-v5 metrics: - type: mean_reward value: 2054.30 +/- 809.44 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Centipede-v5** This is a trained model of a PPO agent playing Centipede-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Centipede-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Centipede-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Centipede-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T16:19:47Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Berzerk-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:19:46Z
--- tags: - Berzerk-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Berzerk-v5 type: Berzerk-v5 metrics: - type: mean_reward value: 503.00 +/- 76.16 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Berzerk-v5** This is a trained model of a PPO agent playing Berzerk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Berzerk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Berzerk-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T16:19:41Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Berzerk-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:19:39Z
--- tags: - Berzerk-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Berzerk-v5 type: Berzerk-v5 metrics: - type: mean_reward value: 518.00 +/- 109.34 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Berzerk-v5** This is a trained model of a PPO agent playing Berzerk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Berzerk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Berzerk-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:19:30Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Berzerk-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:19:29Z
--- tags: - Berzerk-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Berzerk-v5 type: Berzerk-v5 metrics: - type: mean_reward value: 541.00 +/- 126.37 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Berzerk-v5** This is a trained model of a PPO agent playing Berzerk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Berzerk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Berzerk-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T16:19:22Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "CrazyClimber-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:19:21Z
--- tags: - CrazyClimber-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CrazyClimber-v5 type: CrazyClimber-v5 metrics: - type: mean_reward value: 110540.00 +/- 10599.08 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **CrazyClimber-v5** This is a trained model of a PPO agent playing CrazyClimber-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id CrazyClimber-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id CrazyClimber-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'CrazyClimber-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:18:57Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Defender-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:18:52Z
--- tags: - Defender-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Defender-v5 type: Defender-v5 metrics: - type: mean_reward value: 55745.00 +/- 14538.10 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Defender-v5** This is a trained model of a PPO agent playing Defender-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Defender-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Defender-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Defender-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T16:18:47Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "CrazyClimber-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:18:45Z
--- tags: - CrazyClimber-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CrazyClimber-v5 type: CrazyClimber-v5 metrics: - type: mean_reward value: 90690.00 +/- 18685.85 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **CrazyClimber-v5** This is a trained model of a PPO agent playing CrazyClimber-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id CrazyClimber-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id CrazyClimber-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'CrazyClimber-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T16:18:33Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Defender-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:18:32Z
--- tags: - Defender-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Defender-v5 type: Defender-v5 metrics: - type: mean_reward value: 46740.00 +/- 14657.79 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Defender-v5** This is a trained model of a PPO agent playing Defender-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --env-id Defender-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Defender-v5-cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_machado_atari_wrapper.py --exp-name cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Defender-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Defender-v5', 'exp_name': 'cleanba_impala_envpool_machado_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
Shivraj8615/Reinforce-Pixelcopter
Shivraj8615
2023-03-25T16:08:11Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T16:08:05Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 59.10 +/- 36.98 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
202k/10-5-5
202k
2023-03-25T16:04:35Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-25T15:23:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: 10-5-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 10-5-5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5508 - Accuracy: 0.7273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.5508 | 0.7273 | | No log | 2.0 | 2 | 0.5481 | 0.7273 | | No log | 3.0 | 3 | 0.5427 | 0.7273 | | No log | 4.0 | 4 | 0.5439 | 0.7273 | | No log | 5.0 | 5 | 0.5448 | 0.7273 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
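The 10-5-5 card above reports training hyperparameters and accuracy but no inference example. A minimal sketch of querying such a fine-tuned classifier with the `transformers` pipeline, assuming the checkpoint is published on the Hub under `202k/10-5-5`; the card does not document the expected inputs or what the labels mean, so both are placeholders here:

```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned BERT classifier from the Hub.
# The label names and their meanings are not documented in the card above.
classifier = pipeline("text-classification", model="202k/10-5-5")

# Replace with a sentence from whatever domain the model was trained on.
print(classifier("An example sentence to classify."))
# -> [{'label': 'LABEL_0', 'score': 0.73}]  (illustrative output only)
```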
droid22/ppo-Pyramids
droid22
2023-03-25T16:01:06Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-03-25T16:01:01Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: droid22/ppo-Pyramids 3. Select your *.nn / *.onnx file 4. Click on "Watch the agent play" 👀
202k/10-1-9
202k
2023-03-25T15:45:56Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-25T15:13:43Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: 10-1-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 10-1-9 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6288 - Accuracy: 0.6111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.6288 | 0.6111 | | No log | 2.0 | 2 | 0.6255 | 0.5556 | | No log | 3.0 | 3 | 0.6228 | 0.6111 | | No log | 4.0 | 4 | 0.6212 | 0.6111 | | No log | 5.0 | 5 | 0.6207 | 0.6111 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T15:44:41Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Seaquest-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:44:39Z
--- tags: - Seaquest-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Seaquest-v5 type: Seaquest-v5 metrics: - type: mean_reward value: 1760.00 +/- 15.49 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Seaquest-v5** This is a trained model of a PPO agent playing Seaquest-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Seaquest-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Seaquest-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Seaquest-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T15:44:32Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Seaquest-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:44:31Z
--- tags: - Seaquest-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Seaquest-v5 type: Seaquest-v5 metrics: - type: mean_reward value: 1770.00 +/- 16.12 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Seaquest-v5** This is a trained model of a PPO agent playing Seaquest-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Seaquest-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Seaquest-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Seaquest-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Seaquest-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
pszemraj/flan-t5-small-instructiongen
pszemraj
2023-03-25T15:44:03Z
22
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "instructiongen", "self-instruct", "instruction generation", "dataset:pszemraj/fleece2instructions", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-03-20T02:25:50Z
--- license: apache-2.0 tags: - generated_from_trainer - instructiongen - self-instruct - instruction generation datasets: - pszemraj/fleece2instructions metrics: - rouge model-index: - name: flan-t5-small-instructiongen results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pszemraj/fleece2instructions type: pszemraj/fleece2instructions split: validation metrics: - name: Rouge1 type: rouge value: 52.201 widget: - text: >- You'll need to start by choosing the right venue. Consider the type of atmosphere and the size of the area that will be suitable for the number of guests you plan to invite. Choose the right decorations based on your brother's interests, such as balloons in his favorite colors, banners, and streamers. Next, decide on the food and drinks, making sure they are tasty and appropriate for the occasion. Then decide on the other games, music, and entertainment that will make the party memorable. Finally, involve your brother's friends and family to help create the perfect surprise. example_title: birthday party - text: 1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo example_title: ice cream - text: >- Start by selecting a scale model of a building that fits the theme. Use a hobby knife and glue to cut and assemble the model into a ruined or abandoned version of itself, adding details like broken windows and graffiti. Create a base for the diorama using foam, plaster, or other materials, and paint it to resemble a ruined street or sidewalk. Add miniature vehicles, debris, and figures to complete the scene, and use weathering techniques like dry brushing and rust washes to add realism. Display the diorama in a shadow box or other protective case to showcase your work. example_title: Miniature diorama creation - text: >- Start by selecting clothing that is futuristic and edgy, such as leather jackets, neon-colored accessories, and tech-inspired patterns. Add accessories like goggles, cybernetic implants, and LED lights to enhance the cyberpunk vibe. Use makeup and body paint to create a futuristic look, such as metallic skin or neon makeup. Consider adding functional elements to your costume, such as a built-in backpack or hidden pockets for your tech gadgets. Finally, practice your confident walk and embrace your inner cyberpunk for a memorable and immersive costume experience. example_title: Cyberpunk costume design - text: >- Start by creating a base terrain with mountains, valleys, and other natural features. Use fractal noise and displacement mapping to add texture and detail to the terrain, and experiment with different materials like rock, grass, and water. Add surreal elements like floating islands, giant mushrooms, or impossible geometry to create a dreamlike atmosphere. Use lighting and color grading to enhance the mood and tone of the scene, and render the final image at a high resolution for maximum impact. Share your surreal landscape with the world and inspire others to explore the possibilities of 3D art. example_title: Surreal 3D landscape creation - text: >- Start by setting a realistic goal and creating a training plan. Build up your mileage gradually over time, and incorporate cross-training and strength exercises to prevent injury and improve endurance. Be sure to stay hydrated and properly fuel your body with nutritious foods. Listen to your body and adjust your training as needed to avoid overexertion or burnout. 
Finally, taper your training in the weeks leading up to the race to give your body time to rest and recover before the big day. example_title: Marathon training pipeline_tag: text2text-generation --- # flan-t5-small-instructiongen Instead of generating questions from text, generate instructions for LLMs! This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3401 - Rouge1: 52.201 - Rouge2: 35.6154 - Rougel: 50.2334 - Rougelsum: 50.338 - Gen Len: 14.0450 ## Intended uses & limitations This is just a **small** model/example. There is likely to be even better performance with larger models (ex [pszemraj/bart-base-instructiongen)](https://huggingface.co/pszemraj/bart-base-instructiongen) generalizes better) Additionally, this was trained on a dataset of **only** instructions+outputs, with the `inputs` filtered out. This means that text of *1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo* will **not** get you *"Rank the following ice cream flavors: oreo, mint chip, chocolate chip, cookies and cream"*. ## Training and evaluation data See the linked dataset `pszemraj/fleece2instructions` - it is a filtered/formatted version of `tatsu-lab/alpaca` to generate instructions for arbitrary text. - Some of the API examples are intentionally weird to demonstrate the generalizability of the model. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.02 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.6161 | 1.0 | 181 | 1.3714 | 51.1003 | 34.5701 | 49.1277 | 49.2466 | 13.8357 | | 1.539 | 2.0 | 362 | 1.3401 | 52.201 | 35.6154 | 50.2334 | 50.338 | 14.0450 |
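The flan-t5-small-instructiongen card above lists widget examples but no code snippet. A minimal usage sketch with the `transformers` pipeline (assumed usage, not taken from the card; the generation settings are illustrative):

```python
from transformers import pipeline

# Feed the *output* text; the model proposes an instruction that could have produced it.
generator = pipeline("text2text-generation", model="pszemraj/flan-t5-small-instructiongen")

text = "1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo"
result = generator(text, max_new_tokens=48)[0]["generated_text"]
print(result)
# Expect a generic instruction such as "List some ice cream flavors."
# As the card notes, the model was trained without `inputs`, so it will not
# reconstruct ranking-style prompts that depend on a separate input field.
```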
israel-avihail/ppo-SnowballTarget
israel-avihail
2023-03-25T15:43:26Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-03-25T15:43:21Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: israel-avihail/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Akashpb13/Kabyle_xlsr
Akashpb13
2023-03-25T15:40:54Z
26
2
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "kab", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - kab license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - sw - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: Akashpb13/Kabyle_xlsr results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: kab metrics: - name: Test WER type: wer value: 0.3188425282720088 - name: Test CER type: cer value: 0.09443079928558358 --- # Akashpb13/Kabyle_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets): - Loss: 0.159032 - Wer: 0.187934 ## Model description "facebook/wav2vec2-xls-r-300m" was finetuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common voice Kabyle train.tsv. Only 50,000 records were sampled randomly and trained due to huge size of dataset. Only those points were considered where upvotes were greater than downvotes and duplicates were removed after concatenation of all the datasets given in common voice 7.0 ## Training procedure For creating the training dataset, all possible datasets were appended and 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 8 - seed: 13 - gradient_accumulation_steps: 4 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |-------|---------------|-----------------|----------| | 500 | 7.199800 | 3.130564 | 1.000000 | | 1000 | 1.570200 | 0.718097 | 0.734682 | | 1500 | 0.850800 | 0.524227 | 0.640532 | | 2000 | 0.712200 | 0.468694 | 0.603454 | | 2500 | 0.651200 | 0.413833 | 0.573025 | | 3000 | 0.603100 | 0.403680 | 0.552847 | | 3500 | 0.553300 | 0.372638 | 0.541719 | | 4000 | 0.537200 | 0.353759 | 0.531191 | | 4500 | 0.506300 | 0.359109 | 0.519601 | | 5000 | 0.479600 | 0.343937 | 0.511336 | | 5500 | 0.479800 | 0.338214 | 0.503948 | | 6000 | 0.449500 | 0.332600 | 0.495221 | | 6500 | 0.439200 | 0.323905 | 0.492635 | | 7000 | 0.434900 | 0.310417 | 0.484555 | | 7500 | 0.403200 | 0.311247 | 0.483262 | | 8000 | 0.401500 | 0.295637 | 0.476566 | | 8500 | 0.397000 | 0.301321 | 0.471672 | | 9000 | 0.371600 | 0.295639 | 0.468440 | | 9500 | 0.370700 | 0.294039 | 0.468902 | | 10000 | 0.364900 | 0.291195 | 0.468440 | | 10500 | 0.348300 | 0.284898 | 0.461098 | | 11000 | 0.350100 | 0.281764 | 0.459805 | | 11500 | 0.336900 | 0.291022 | 0.461606 | | 12000 | 0.330700 | 0.280467 | 0.455234 | | 12500 | 0.322500 | 0.271714 | 0.452694 | | 13000 | 0.307400 | 0.289519 | 0.455465 | | 13500 | 0.309300 | 0.281922 | 0.451217 | | 14000 | 0.304800 | 0.271514 | 0.452186 | | 14500 | 0.288100 | 0.286801 | 0.446830 | | 15000 | 0.293200 | 0.276309 | 0.445399 | | 15500 | 0.289800 | 0.287188 | 0.446230 | | 16000 | 0.274800 | 0.286406 | 0.441243 | | 16500 | 0.271700 | 0.284754 | 0.441520 | | 17000 | 0.262500 | 0.275431 | 0.442167 | | 17500 | 0.255500 | 0.276575 | 0.439858 | | 18000 | 0.260200 | 0.269911 | 0.435425 | | 18500 | 0.250600 | 0.270519 | 0.434686 | | 19000 | 0.243300 | 0.267655 | 
0.437826 | | 19500 | 0.240600 | 0.277109 | 0.431731 | | 20000 | 0.237200 | 0.266622 | 0.433994 | | 20500 | 0.231300 | 0.273015 | 0.428868 | | 21000 | 0.227200 | 0.263024 | 0.430161 | | 21500 | 0.220400 | 0.272880 | 0.429607 | | 22000 | 0.218600 | 0.272340 | 0.426883 | | 22500 | 0.213100 | 0.277066 | 0.428407 | | 23000 | 0.205000 | 0.278404 | 0.424020 | | 23500 | 0.200900 | 0.270877 | 0.418987 | | 24000 | 0.199000 | 0.289120 | 0.425821 | | 24500 | 0.196100 | 0.275831 | 0.424066 | | 25000 | 0.191100 | 0.282822 | 0.421850 | | 25500 | 0.190100 | 0.275820 | 0.418248 | | 26000 | 0.178800 | 0.279208 | 0.419125 | | 26500 | 0.183100 | 0.271464 | 0.419218 | | 27000 | 0.177400 | 0.280869 | 0.419680 | | 27500 | 0.171800 | 0.279593 | 0.414924 | | 28000 | 0.172900 | 0.276949 | 0.417648 | | 28500 | 0.164900 | 0.283491 | 0.417786 | | 29000 | 0.164800 | 0.283122 | 0.416078 | | 29500 | 0.165500 | 0.281969 | 0.415801 | | 30000 | 0.163800 | 0.283319 | 0.412753 | | 30500 | 0.153500 | 0.285702 | 0.414046 | | 31000 | 0.156500 | 0.285041 | 0.412615 | | 31500 | 0.150900 | 0.284336 | 0.413723 | | 32000 | 0.151800 | 0.285922 | 0.412292 | | 32500 | 0.149200 | 0.289461 | 0.412153 | | 33000 | 0.145400 | 0.291322 | 0.409567 | | 33500 | 0.145600 | 0.294361 | 0.409614 | | 34000 | 0.144200 | 0.290686 | 0.409059 | | 34500 | 0.143400 | 0.289474 | 0.409844 | | 35000 | 0.143500 | 0.290340 | 0.408367 | | 35500 | 0.143200 | 0.289581 | 0.407351 | | 36000 | 0.138400 | 0.292782 | 0.408736 | | 36500 | 0.137900 | 0.289108 | 0.408044 | | 37000 | 0.138200 | 0.292127 | 0.407166 | | 37500 | 0.134600 | 0.291797 | 0.408413 | | 38000 | 0.139800 | 0.290056 | 0.408090 | | 38500 | 0.136500 | 0.291198 | 0.408090 | | 39000 | 0.137700 | 0.289696 | 0.408044 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id Akashpb13/Kabyle_xlsr --dataset mozilla-foundation/common_voice_8_0 --config kab --split test ```
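A minimal inference sketch, not part of the original card: it transcribes a local audio file with the standard `transformers` ASR pipeline. The file path is a placeholder; any 16 kHz mono recording in Kabyle should work.

```python
# Hypothetical inference sketch: transcribe a local audio file (placeholder path).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Akashpb13/Kabyle_xlsr")
print(asr("audio.wav")["text"])
```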
Akashpb13/xlsr_kurmanji_kurdish
Akashpb13
2023-03-25T15:40:45Z
45
12
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "ku", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - kmr - ku license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_8_0 - generated_from_trainer - kmr - robust-speech-event - model_for_talk - hf-asr-leaderboard datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: Akashpb13/xlsr_kurmanji_kurdish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: kmr metrics: - name: Test WER type: wer value: 0.33073206986250464 - name: Test CER type: cer value: 0.08035244447163924 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: kmr metrics: - name: Test WER type: wer value: 0.33073206986250464 - name: Test CER type: cer value: 0.08035244447163924 --- # Akashpb13/xlsr_kurmanji_kurdish This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Kurmanji Kurdish (kmr) portion of the Mozilla Common Voice dataset. It achieves the following results on the evaluation set (which is 10 percent of the training data merged with the invalidated, reported, other, and dev datasets): - Loss: 0.292389 - Wer: 0.388585 ## Model description "facebook/wav2vec2-xls-r-300m" was fine-tuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data: Common Voice Kurmanji Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv. Only data points with more upvotes than downvotes were kept, and duplicates were removed after concatenating all the datasets given in Common Voice 7.0. ## Training procedure To create the training dataset, all available splits were appended and a 90-10 train-evaluation split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000096 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 16 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 200 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |------|---------------|-----------------|----------| | 200 | 4.382500 | 3.183725 | 1.000000 | | 400 | 2.870200 | 0.996664 | 0.781117 | | 600 | 0.609900 | 0.333755 | 0.445052 | | 800 | 0.326800 | 0.305729 | 0.403157 | | 1000 | 0.255000 | 0.290734 | 0.391621 | | 1200 | 0.226300 | 0.292389 | 0.388585 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.18.1 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`: ```bash python eval.py --model_id Akashpb13/xlsr_kurmanji_kurdish --dataset mozilla-foundation/common_voice_8_0 --config kmr --split test ```
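A small sketch of how WER/CER-style numbers like the ones reported above can be computed offline from reference transcripts and model predictions. It assumes the `jiwer` package (not mentioned in the card), and the Kurmanji sentences are invented placeholders.

```python
# Sketch: compute WER and CER from paired references and predictions with jiwer.
import jiwer

references = ["ez diçim malê", "tu çawa yî"]      # placeholder reference transcripts
predictions = ["ez dicim male", "tu çawa yî"]     # placeholder model outputs

print("WER:", jiwer.wer(references, predictions))
print("CER:", jiwer.cer(references, predictions))
```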
Sosaka/LLaMa-7B-ggml-4bit-OLD
Sosaka
2023-03-25T15:35:56Z
0
2
null
[ "license:other", "region:us" ]
null
2023-03-25T14:15:14Z
--- license: other --- !!! This is just a repost of https://huggingface.co/hlhr202/llama-7B-ggml-int4 to store it together with the executable in one repo, so go to the original repo and give the author a like. !!! The model in this repo is incompatible with new llama-cpp; use versions above 20-03-2023.
cleanrl/Phoenix-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T15:30:10Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Phoenix-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:30:08Z
--- tags: - Phoenix-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Phoenix-v5 type: Phoenix-v5 metrics: - type: mean_reward value: 87729.00 +/- 33838.88 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Phoenix-v5** This is a trained model of a PPO agent playing Phoenix-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Phoenix-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Phoenix-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Phoenix-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Phoenix-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Phoenix-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Phoenix-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3
cleanrl
2023-03-25T15:27:25Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "MsPacman-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:27:23Z
--- tags: - MsPacman-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: MsPacman-v5 type: MsPacman-v5 metrics: - type: mean_reward value: 3826.00 +/- 1373.03 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **MsPacman-v5** This is a trained model of a PPO agent playing MsPacman-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id MsPacman-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'MsPacman-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/Jamesbond-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T15:26:36Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "Jamesbond-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:26:35Z
--- tags: - Jamesbond-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Jamesbond-v5 type: Jamesbond-v5 metrics: - type: mean_reward value: 1595.00 +/- 1566.44 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Jamesbond-v5** This is a trained model of a PPO agent playing Jamesbond-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Jamesbond-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Jamesbond-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1
cleanrl
2023-03-25T15:26:07Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "MsPacman-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:26:05Z
--- tags: - MsPacman-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: MsPacman-v5 type: MsPacman-v5 metrics: - type: mean_reward value: 4207.00 +/- 1438.62 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **MsPacman-v5** This is a trained model of a PPO agent playing MsPacman-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id MsPacman-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed1/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'MsPacman-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```
droid22/ppo-SnowballTarget
droid22
2023-03-25T15:21:42Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-03-25T15:21:37Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Find your model_id: droid22/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
ys7yoo/kosbert_sts
ys7yoo
2023-03-25T15:16:50Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-03-25T15:13:44Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ys7yoo/kosbert_sts This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ys7yoo/kosbert_sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ys7yoo/kosbert_sts') model = AutoModel.from_pretrained('ys7yoo/kosbert_sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ys7yoo/kosbert_sts) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 45 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 23, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
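A short follow-up sketch, not part of the original card: since the model was trained with a cosine-similarity objective, a natural use is scoring sentence pairs with cosine similarity over the embeddings. The Korean sentences below are illustrative placeholders.

```python
# Sketch: score sentence similarity with cosine similarity over the embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ys7yoo/kosbert_sts")
embeddings = model.encode(["오늘 날씨가 좋다", "날씨가 맑고 화창하다"], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.4f}")
```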
zaydzuhri/flan-t5-small-tldr-50k
zaydzuhri
2023-03-25T15:15:28Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-03-25T10:33:27Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-small-tldr-50k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-tldr-50k This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the Reddit TL;DR dataset (https://zenodo.org/record/1168855#.ZB8P-iFByUk). It achieves the following results on the evaluation set: - Gen Len: 16.4422 - Loss: 3.2423 - Rouge1: 14.7049 - Rouge2: 3.2396 - Rougel: 12.5104 - Rougelsum: 12.9681 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Gen Len | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:------:|:-------:|:---------:| | 3.5507 | 1.0 | 5625 | 16.1424 | 3.2752 | 14.2302 | 2.9853 | 12.1734 | 12.5894 | | 3.4842 | 2.0 | 11250 | 16.1126 | 3.2569 | 14.3966 | 3.0939 | 12.2437 | 12.6705 | | 3.4288 | 3.0 | 16875 | 16.39 | 3.2481 | 14.6879 | 3.2647 | 12.5199 | 12.9681 | | 3.4176 | 4.0 | 22500 | 16.2948 | 3.2432 | 14.7198 | 3.2693 | 12.5436 | 12.9885 | | 3.4033 | 5.0 | 28125 | 16.4422 | 3.2423 | 14.7049 | 3.2396 | 12.5104 | 12.9681 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
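A minimal usage sketch, not part of the original card: it runs the model through the standard `transformers` text2text-generation pipeline to produce a TL;DR. The post text below is an invented placeholder, not taken from the TL;DR dataset.

```python
# Hypothetical usage sketch: generate a TL;DR for a Reddit-style post.
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="zaydzuhri/flan-t5-small-tldr-50k")

post = (
    "I have been training for my first marathon for four months, but last week I "
    "missed three runs because of work and now I worry that I am falling behind schedule."
)
print(summarizer(post, max_length=32)[0]["generated_text"])
```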
danzz06/my_awesome_qa_model
danzz06
2023-03-25T15:05:32Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2023-03-23T16:27:44Z
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 0.5410 | | 0.4883 | 2.0 | 500 | 0.5437 | | 0.4883 | 3.0 | 750 | 0.5913 | | 0.226 | 4.0 | 1000 | 0.7683 | | 0.226 | 5.0 | 1250 | 0.8280 | | 0.1266 | 6.0 | 1500 | 0.8528 | | 0.1266 | 7.0 | 1750 | 0.9454 | | 0.0868 | 8.0 | 2000 | 1.1004 | | 0.0868 | 9.0 | 2250 | 1.2183 | | 0.0608 | 10.0 | 2500 | 1.2702 | | 0.0608 | 11.0 | 2750 | 1.3823 | | 0.0427 | 12.0 | 3000 | 1.4355 | | 0.0427 | 13.0 | 3250 | 1.4961 | | 0.0318 | 14.0 | 3500 | 1.6042 | | 0.0318 | 15.0 | 3750 | 1.6052 | | 0.0271 | 16.0 | 4000 | 1.5435 | | 0.0271 | 17.0 | 4250 | 1.6205 | | 0.0215 | 18.0 | 4500 | 1.6248 | | 0.0215 | 19.0 | 4750 | 1.6113 | | 0.0157 | 20.0 | 5000 | 1.6395 | ### Framework versions - Transformers 4.27.3 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
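A minimal usage sketch, not part of the original card: it calls the standard `transformers` question-answering pipeline. The question/context pair below is an invented placeholder.

```python
# Hypothetical usage sketch: extractive question answering with the fine-tuned model.
from transformers import pipeline

qa = pipeline("question-answering", model="danzz06/my_awesome_qa_model")

result = qa(
    question="Which base model was fine-tuned?",
    context="my_awesome_qa_model is a fine-tuned version of deepset/roberta-base-squad2 "
            "trained on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```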
cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2
cleanrl
2023-03-25T15:01:02Z
0
0
cleanrl
[ "cleanrl", "tensorboard", "DemonAttack-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-25T15:01:00Z
--- tags: - DemonAttack-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: DemonAttack-v5 type: DemonAttack-v5 metrics: - type: mean_reward value: 131623.00 +/- 1986.93 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **DemonAttack-v5** This is a trained model of a PPO agent playing DemonAttack-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id DemonAttack-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 30, 'async_update': 1, 'batch_size': 2400, 'capture_video': False, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'DemonAttack-v5', 'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4', 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 600, 'local_minibatch_size': 300, 'local_num_envs': 30, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 1200, 'num_envs': 120, 'num_minibatches': 2, 'num_steps': 20, 'num_updates': 20833, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 4} ```