Upload README.md with huggingface_hub
README.md CHANGED

@@ -21,7 +21,7 @@ model-index:
       type: OpenAI/Gym/Atari-PongNoFrameskip-v4
     metrics:
     - type: mean_reward
-      value:
+      value: 19.6 +/- 1.74
       name: mean_reward
 ---
 
@@ -45,7 +45,7 @@ This is a simple **PPO** implementation to OpenAI/Gym/Atari **PongNoFrameskip-v4
 git clone https://github.com/opendilab/huggingface_ding.git
 pip3 install -e ./huggingface_ding/
 # install environment dependencies if needed
-pip3 install DI-engine[common_env]
+pip3 install DI-engine[common_env,video]
 ```
 </details>
 
@@ -138,7 +138,7 @@ push_model_to_hub(
     github_repo_url="https://github.com/opendilab/DI-engine",
     github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html",
     github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html",
-    installation_guide="pip3 install DI-engine[common_env]",
+    installation_guide="pip3 install DI-engine[common_env,video]",
     usage_file_by_git_clone="./ppo/pong_ppo_deploy.py",
     usage_file_by_huggingface_ding="./ppo/pong_ppo_download.py",
     train_file="./ppo/pong_ppo.py",
@@ -202,7 +202,7 @@ exp_config = {
 - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/PongNoFrameskip-v4-PPO/blob/main/replay.mp4)
 <!-- Provide the size information for the model. -->
 - **Parameters total size:** 11501.55 KB
-- **Last Update Date:** 2023-09-
+- **Last Update Date:** 2023-09-22
 
 ## Environments
 <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->