| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2 | cleanrl | 2023-02-23T16:22:42Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"MontezumaRevenge-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:22:41Z | ---
tags:
- MontezumaRevenge-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MontezumaRevenge-v5
type: MontezumaRevenge-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **MontezumaRevenge-v5**
This is a trained model of a PPO agent playing MontezumaRevenge-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id MontezumaRevenge-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MontezumaRevenge-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'MontezumaRevenge-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
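For reference, the derived sizes in the dict above (batch, minibatch, and update counts) follow directly from the base settings; a minimal sketch of the arithmetic, assuming the usual Cleanba convention that each update consumes one `num_steps`-long rollout from every environment:
```python
# Derived quantities implied by the hyperparameters listed above.
local_num_envs, num_steps, world_size = 60, 128, 2
num_minibatches, total_timesteps = 4, 50_000_000

num_envs = local_num_envs * world_size                      # 120
local_batch_size = local_num_envs * num_steps               # 7680
batch_size = local_batch_size * world_size                  # 15360
local_minibatch_size = local_batch_size // num_minibatches  # 1920
minibatch_size = batch_size // num_minibatches              # 3840
num_updates = total_timesteps // batch_size                 # 3255
```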
|
cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3 | cleanrl | 2023-02-23T16:22:40Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Kangaroo-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:22:38Z | ---
tags:
- Kangaroo-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Kangaroo-v5
type: Kangaroo-v5
metrics:
- type: mean_reward
value: 3180.00 +/- 183.30
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Kangaroo-v5**
This is a trained model of a PPO agent playing Kangaroo-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Kangaroo-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Kangaroo-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3 | cleanrl | 2023-02-23T16:22:18Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BattleZone-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:22:17Z | ---
tags:
- BattleZone-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BattleZone-v5
type: BattleZone-v5
metrics:
- type: mean_reward
value: 33800.00 +/- 5600.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BattleZone-v5**
This is a trained model of a PPO agent playing BattleZone-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id BattleZone-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'BattleZone-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3 | cleanrl | 2023-02-23T16:22:16Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"MontezumaRevenge-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:22:14Z | ---
tags:
- MontezumaRevenge-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MontezumaRevenge-v5
type: MontezumaRevenge-v5
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **MontezumaRevenge-v5**
This is a trained model of a PPO agent playing MontezumaRevenge-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id MontezumaRevenge-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MontezumaRevenge-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MontezumaRevenge-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'MontezumaRevenge-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2 | cleanrl | 2023-02-23T16:22:00Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Jamesbond-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:21:58Z | ---
tags:
- Jamesbond-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Jamesbond-v5
type: Jamesbond-v5
metrics:
- type: mean_reward
value: 7080.00 +/- 2833.39
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5**
This is a trained model of a PPO agent playing Jamesbond-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Jamesbond-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Jamesbond-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
enlacinglines/SnowballTarget1 | enlacinglines | 2023-02-23T16:21:06Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:21:01Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: enlacinglines/SnowballTarget1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3 | cleanrl | 2023-02-23T16:21:05Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Boxing-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:21:03Z | ---
tags:
- Boxing-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Boxing-v5
type: Boxing-v5
metrics:
- type: mean_reward
value: 93.60 +/- 5.82
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Boxing-v5**
This is a trained model of a PPO agent playing Boxing-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Boxing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Boxing-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Boxing-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1 | cleanrl | 2023-02-23T16:20:53Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"MsPacman-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:20:52Z | ---
tags:
- MsPacman-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacman-v5
type: MsPacman-v5
metrics:
- type: mean_reward
value: 2003.00 +/- 525.40
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **MsPacman-v5**
This is a trained model of a PPO agent playing MsPacman-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id MsPacman-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'MsPacman-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1 | cleanrl | 2023-02-23T16:20:50Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Jamesbond-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:20:49Z | ---
tags:
- Jamesbond-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Jamesbond-v5
type: Jamesbond-v5
metrics:
- type: mean_reward
value: 3010.00 +/- 2849.63
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5**
This is a trained model of a PPO agent playing Jamesbond-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Jamesbond-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Jamesbond-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3 | cleanrl | 2023-02-23T16:20:48Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"MsPacman-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:20:47Z | ---
tags:
- MsPacman-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacman-v5
type: MsPacman-v5
metrics:
- type: mean_reward
value: 1389.00 +/- 107.28
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **MsPacman-v5**
This is a trained model of a PPO agent playing MsPacman-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id MsPacman-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/MsPacman-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id MsPacman-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'MsPacman-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
avojarot/Reinforce-2 | avojarot | 2023-02-23T16:19:39Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T15:54:51Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.10 +/- 10.93
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cleanrl/Gravitar-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1 | cleanrl | 2023-02-23T16:19:06Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Gravitar-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:19:04Z | ---
tags:
- Gravitar-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Gravitar-v5
type: Gravitar-v5
metrics:
- type: mean_reward
value: 645.00 +/- 172.41
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Gravitar-v5**
This is a trained model of a PPO agent playing Gravitar-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Gravitar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Gravitar-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
mingdinghan/Reinforce-Pixelcopter-PLE-v0 | mingdinghan | 2023-02-23T16:18:53Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:18:50Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.90 +/- 14.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1 | cleanrl | 2023-02-23T16:18:36Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"NameThisGame-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:18:35Z | ---
tags:
- NameThisGame-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: NameThisGame-v5
type: NameThisGame-v5
metrics:
- type: mean_reward
value: 11754.00 +/- 2183.50
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **NameThisGame-v5**
This is a trained model of a PPO agent playing NameThisGame-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id NameThisGame-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/NameThisGame-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id NameThisGame-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'NameThisGame-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
cleanrl/Berzerk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1 | cleanrl | 2023-02-23T16:18:19Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Berzerk-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:18:18Z | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 886.00 +/- 167.10
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
tvarella/boballl | tvarella | 2023-02-23T16:18:05Z | 37 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:17:58Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: tvarella/boballl
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3 | cleanrl | 2023-02-23T16:16:41Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Frostbite-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T16:16:39Z | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 311.00 +/- 3.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained with [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py).
## Get Started
To use this model, install the `cleanrl` package and run the evaluation script with the following commands:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
huggingtweets/chromeeight-elonmusk | huggingtweets | 2023-02-23T16:11:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-23T16:09:30Z | ---
language: en
thumbnail: http://www.huggingtweets.com/chromeeight-elonmusk/1677168649061/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1581022168967618560/hek6M_Wq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Gan & Elon Musk</div>
<div style="text-align: center; font-size: 14px;">@chromeeight-elonmusk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
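Roughly, the pipeline downloads the users' tweets, filters out retweets and very short tweets (see the Training data table below), and fine-tunes GPT-2 on what remains. A minimal sketch of that fine-tuning step with plain Transformers is shown below; it is illustrative only, not the actual huggingtweets training script, and the toy tweet list stands in for the real data:
```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Toy stand-in for the cleaned tweets (retweets and short tweets already removed).
tweets = ["Just watched a rocket landing, incredible.", "Working on the next release."]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize the tweets; Trainer drops the unused "text" column automatically.
dataset = Dataset.from_dict({"text": tweets}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-tweets", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```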
## Training data
The model was trained on tweets from Matthew Gan & Elon Musk.
| Data | Matthew Gan | Elon Musk |
| --- | --- | --- |
| Tweets downloaded | 3099 | 3192 |
| Retweets | 1793 | 169 |
| Short tweets | 150 | 1056 |
| Tweets kept | 1156 | 1967 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cwdsrhbn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chromeeight-elonmusk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bxr6f4ya) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bxr6f4ya/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/chromeeight-elonmusk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
muhammadravi251001/fine-tuned-IndoNLI-data_translated-with_XLMR | muhammadravi251001 | 2023-02-23T16:10:26Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T09:36:52Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-data_translated-with_XLMR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-data_translated-with_XLMR
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2656
- Accuracy: 0.24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1
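The total train batch size reported above is simply the per-device batch size scaled by gradient accumulation; a quick sketch of that arithmetic:
```python
# Effective batch size implied by the hyperparameters listed above.
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 64
```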
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5625 | 1.0 | 1 | 1.2656 | 0.24 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
muhammadravi251001/fine-tuned-IndoNLI-data_augmented-with_XLMR | muhammadravi251001 | 2023-02-23T16:09:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T16:05:21Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-data_augmented-with_XLMR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-data_augmented-with_XLMR
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1625
- Accuracy: 0.12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5396 | 1.0 | 1 | 1.1625 | 0.12 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
svjack/mt0-large-comet-atomic-zh-peft-early | svjack | 2023-02-23T16:04:03Z | 0 | 0 | null | [
"text2text-generation",
"zh",
"region:us"
]
| text2text-generation | 2023-02-23T15:05:35Z | ---
language:
- zh
pipeline_tag: text2text-generation
---
```python
#### peft version: '0.2.0.dev0'
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import pandas as pd
import torch
peft_model_id = "svjack/mt0-large-comet-atomic-zh-peft-early"
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
#### model.device "cuda"
device = "cuda:0"
model = PeftModel.from_pretrained(model, peft_model_id)
model = model.to(device)  # move the model to the GPU so it matches the inputs created below
model.eval()
print("have load")
NEED_PREFIX = '以下事件有哪些必要的先决条件:'  # "What are the necessary preconditions for the following event:"
EFFECT_PREFIX = '下面的事件发生后可能会发生什么:'  # "What might happen after the following event occurs:"
INTENT_PREFIX = '以下事件的动机是什么:'  # "What is the motivation behind the following event:"
REACT_PREFIX = '以下事件发生后,你有什么感觉:'  # "How do you feel after the following event occurs:"
event = "X吃了一顿美餐。"  # "X ate a nice meal."
for prefix in [NEED_PREFIX, EFFECT_PREFIX, INTENT_PREFIX, REACT_PREFIX]:
prompt = "{}{}".format(prefix, event)
encode = tokenizer(prompt, return_tensors='pt').to(device)
answer = model.generate(input_ids = encode.input_ids,
max_length = 128,
num_beams=2,
top_p = 0.95,
top_k = 50,
repetition_penalty = 2.5,
length_penalty=1.0,
early_stopping=True,
)[0]
decoded = tokenizer.decode(answer, skip_special_tokens=True)
print(prompt, "\n---答案:", decoded, "----\n")
```
</br>
```json
以下事件有哪些必要的先决条件:X吃了一顿美餐。
---答案: X去超市购物 ----
下面的事件发生后可能会发生什么:X吃了一顿美餐。
---答案: X变胖 ----
以下事件的动机是什么:X吃了一顿美餐。
---答案: X想吃好吃的东西 ----
以下事件发生后,你有什么感觉:X吃了一顿美餐。
---答案: 我可以放松一下 ----
``` |
svjack/mt0-large-comet-atomic-zh-peft-early-cpu | svjack | 2023-02-23T16:02:18Z | 0 | 0 | null | [
"text2text-generation",
"zh",
"region:us"
]
| text2text-generation | 2023-02-23T15:42:25Z | ---
language:
- zh
pipeline_tag: text2text-generation
---
```python
#### peft version: '0.2.0.dev0'
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import pandas as pd
import torch
peft_model_id = "svjack/mt0-large-comet-atomic-zh-peft-early-cpu"
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
#### model.device "cpu"
device = "cpu"
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
print("have load")
NEED_PREFIX = '以下事件有哪些必要的先决条件:'  # "What are the necessary preconditions for the following event:"
EFFECT_PREFIX = '下面的事件发生后可能会发生什么:'  # "What might happen after the following event occurs:"
INTENT_PREFIX = '以下事件的动机是什么:'  # "What is the motivation behind the following event:"
REACT_PREFIX = '以下事件发生后,你有什么感觉:'  # "How do you feel after the following event occurs:"
event = "X吃了一顿美餐。"  # "X ate a nice meal."
for prefix in [NEED_PREFIX, EFFECT_PREFIX, INTENT_PREFIX, REACT_PREFIX]:
prompt = "{}{}".format(prefix, event)
encode = tokenizer(prompt, return_tensors='pt').to(device)
answer = model.generate(input_ids = encode.input_ids,
max_length = 128,
num_beams=2,
top_p = 0.95,
top_k = 50,
repetition_penalty = 2.5,
length_penalty=1.0,
early_stopping=True,
)[0]
decoded = tokenizer.decode(answer, skip_special_tokens=True)
print(prompt, "\n---答案:", decoded, "----\n")
```
</br>
```json
以下事件有哪些必要的先决条件:X吃了一顿美餐。
---答案: X去超市购物 ----
下面的事件发生后可能会发生什么:X吃了一顿美餐。
---答案: X变胖 ----
以下事件的动机是什么:X吃了一顿美餐。
---答案: X想吃好吃的东西 ----
以下事件发生后,你有什么感觉:X吃了一顿美餐。
---答案: 我可以放松一下 ----
``` |
algocompretto/dqn-SpaceInvadersNoFrameskip-v0 | algocompretto | 2023-02-23T15:57:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T15:56:45Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 254.00 +/- 114.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga algocompretto -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga algocompretto -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga algocompretto
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 1e-05),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
snowdere/test_trainer | snowdere | 2023-02-23T15:36:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T15:27:57Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [FinanceInc/finbert-pretrain](https://huggingface.co/FinanceInc/finbert-pretrain) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.5916
- Accuracy: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 9.6334 | 1.0 | 2975 | 9.6220 | 0.0001 |
| 9.6098 | 2.0 | 5950 | 9.6034 | 0.0001 |
| 9.6041 | 3.0 | 8925 | 9.5916 | 0.0001 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Constien/IM_Model | Constien | 2023-02-23T15:27:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T15:26:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: IM_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IM_Model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Theju/loso_m07_main_1 | Theju | 2023-02-23T15:27:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-23T11:30:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: loso_m07_main_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# loso_m07_main_1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0752
- Wer: 1.62
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6242 | 0.96 | 500 | 3.4099 | 1.0 |
| 2.6899 | 1.92 | 1000 | 1.6890 | 2.3556 |
| 1.0312 | 2.88 | 1500 | 0.3006 | 1.9356 |
| 0.3173 | 3.84 | 2000 | 0.1852 | 1.7044 |
| 0.1357 | 4.8 | 2500 | 0.1000 | 1.5333 |
| 0.079 | 5.76 | 3000 | 0.0877 | 1.6156 |
| 0.0559 | 6.72 | 3500 | 0.0752 | 1.62 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
avojarot/PyramidsTraining | avojarot | 2023-02-23T15:23:28Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-23T15:23:23Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: avojarot/PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sanchit-gandhi/whisper-small-ru-1k-steps | sanchit-gandhi | 2023-02-23T15:22:00Z | 158 | 3 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"ru",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-12T03:07:17Z | ---
language:
- ru
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Russian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ru
type: mozilla-foundation/common_voice_11_0
config: ru
split: test
args: ru
metrics:
- name: Wer
type: wer
value: 12.883608587437623
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Russian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ru dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2179
- Wer: 12.8836
## Model description
More information needed
## Intended uses & limitations
More information needed
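As a minimal usage sketch (not provided by the original author), the checkpoint can be loaded with the 🤗 Transformers `pipeline` for Russian speech transcription; the audio file name below is only a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Russian speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/whisper-small-ru-1k-steps",
)

# "sample_ru.wav" is a placeholder for any 16 kHz Russian speech recording
print(asr("sample_ru.wav")["text"])
```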
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0637 | 1.4 | 1000 | 0.2179 | 12.8836 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221210+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
avojarot/Reinforce-1 | avojarot | 2023-02-23T15:20:10Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T15:19:58Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 490.80 +/- 27.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Anon1216/kgflora | Anon1216 | 2023-02-23T14:57:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T14:54:49Z | ---
license: creativeml-openrail-m
---
|
Leonhard17/ppo-SnowballTarget | Leonhard17 | 2023-02-23T14:42:57Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-23T14:42:51Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Leonhard17/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
J3/Reinforce-CartPole-v1 | J3 | 2023-02-23T14:42:06Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T14:41:58Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Tublean/NobelDifusion | Tublean | 2023-02-23T14:40:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T14:29:09Z | ---
license: creativeml-openrail-m
---
|
michalcisek5/dqn-SpaceInvadersNoFrameskip-v4 | michalcisek5 | 2023-02-23T14:35:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T14:34:40Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 559.00 +/- 81.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga michalcisek5 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga michalcisek5 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga michalcisek5
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jamesthong/a2c-PandaReachDense-v2 | jamesthong | 2023-02-23T14:11:36Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T14:09:13Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.41 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
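Until the author adds their own snippet, a minimal sketch along these lines should work; the checkpoint filename `a2c-PandaReachDense-v2.zip` is an assumption based on the usual SB3 naming convention:
```python
import gym
import panda_gym  # registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="jamesthong/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# Evaluate the trained agent for a few episodes
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```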
|
muhammadravi251001/fine-tuned-IndoNLI-data_train-with_XLMR | muhammadravi251001 | 2023-02-23T14:04:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-22T15:16:42Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-data_train-with_XLMR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-data_train-with_XLMR
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1820
- Accuracy: 0.12
## Model description
More information needed
## Intended uses & limitations
More information needed
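A minimal NLI sketch (not part of the original card): the premise/hypothesis pair below is only an example, and the label names may stay generic (`LABEL_0`/`LABEL_1`/`LABEL_2`) unless an `id2label` mapping was saved with the checkpoint:
```python
from transformers import pipeline

nli = pipeline(
    "text-classification",
    model="muhammadravi251001/fine-tuned-IndoNLI-data_train-with_XLMR",
)

# Indonesian premise/hypothesis pair (illustrative only)
pair = {
    "text": "Seorang pria sedang memasak di dapur.",          # premise
    "text_pair": "Ada orang yang sedang menyiapkan makanan.",  # hypothesis
}
print(nli(pair))
```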
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5341 | 1.0 | 1 | 1.1820 | 0.12 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
jakub014/bert-base-uncased-finetuned-sufficiency-ukp-balanced | jakub014 | 2023-02-23T14:01:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T13:54:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sufficiency-ukp-balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sufficiency-ukp-balanced
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1493
- Accuracy: 0.9559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 69 | 0.2807 | 0.9007 |
| No log | 2.0 | 138 | 0.1804 | 0.9338 |
| No log | 3.0 | 207 | 0.1493 | 0.9559 |
| No log | 4.0 | 276 | 0.1558 | 0.9559 |
| No log | 5.0 | 345 | 0.1601 | 0.9559 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Wikked/q-FrozenLake-v1-4x4-noSlippery | Wikked | 2023-02-23T13:59:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T13:57:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Wikked/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
threite/ppo-LunarLander-v2-self | threite | 2023-02-23T13:53:26Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T13:30:11Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -99.80 +/- 46.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'gym_id': 'LunarLander-v2'
'learning_rate': 0.00025
'seed': 1
'total_timesteps': 50000
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'threite/ppo-LunarLander-v2-self'
'batch_size': 512
'minibatch_size': 128
'env_id': 'LunarLander-v2'}
```
|
muhammadravi251001/fine-tuned-IndoNLI-data_train-with_IndoNLU-Large-V2 | muhammadravi251001 | 2023-02-23T13:53:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-22T15:09:36Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-data_train-with_IndoNLU-Large-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-data_train-with_IndoNLU-Large-V2
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3468
- Accuracy: 0.12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.561 | 1.0 | 1 | 1.3468 | 0.12 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
pabloac31/poca-SoccerTwos | pabloac31 | 2023-02-23T13:52:35Z | 27 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-23T13:52:28Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: pabloac31/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
wimvanhenden/blade-runner-2049-v1 | wimvanhenden | 2023-02-23T13:44:17Z | 0 | 7 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-15T15:47:55Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: bldrnrst
---
### Blade Runner 2049 v1 Dreambooth model trained by wimvanhenden with the v1-5 base model
Use bldrnrst as prompt prefix
bldrnrst, a photo of a man with blood on his face

bldrnrst, a photo of a woman with blood on her face

Results:

Sample pictures of training set:

|
Leonhard17/Reinforce-Pixelcopter-PLE-v0 | Leonhard17 | 2023-02-23T13:36:50Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T13:36:47Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.40 +/- 15.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jamesthong/a2c-AntBulletEnv-v0 | jamesthong | 2023-02-23T13:18:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T13:17:03Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1919.78 +/- 335.18
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Alsebay/PeachMixs | Alsebay | 2023-02-23T13:08:27Z | 0 | 19 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-13T05:12:01Z | ---
license: creativeml-openrail-m
---
---
My pixiv: https://www.pixiv.net/en/users/75679841
[Civitai version of PeachUltima](https://civitai.com/models/12048/peachmixs-ultima-version)
# PeachMix
"PeachMix" (Oldname: RadiationFruitMix) are various merge models base on a lot of different models.
These are a lot of things in FAQ, you should check it first.
---
# Table of Contents (made for easy searching :3 )
- [PeachMix](#peachmix)
- [License](#license)
- [Disclaimer](#disclaimer)
- [Use Models](#use-models)
- [Model Detail](#model-detail)
- [PeachMix - Tachyon version](#peachmix---tachyon-version)
- [PeachMix - Ultima version](#peachmix---ultima-version)
- [PeachMix V1](#peachmix-v1)
- [C.A.M(PeachMix1 V1)](#campeachmix1-v1)
- [PeachMix2 V1](#peachmix2-v1)
- [PeachMix3 V1](#peachmix3-v1)
- [PeachMix V2](#peachmix-v2)
- [C.A.M 2(PeachMix1 V2)](#cam-2peachmix1-v2)
- [PeachMix2 V2](#peachmix2-v2)
- [PeachMix3 V2](#peachmix3-v2)
- [PeachMix4 V2](#peachmix4-v2)
- [PeachMix5 V2](#peachmix5-v2)
- [PeachMix V3](#peachmix-v3)
- [PeachMix2 V3](#peachmix2-v3)
- [Fix Version](#peachmix2-v3-fix-version)
- [FAQ](#faq)
---
# License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
(Full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license)
# Disclaimer
- Creation of SFW and NSFW images is the user's decision; the user has complete control over whether or not to generate NSFW content.
- This model was not created to publish NSFW content in public places.
- Some prompts for generating these images were from my friends, and I have their permission to use them.
---
# Use Models
* V1, V2 ,V3
- Mostly use: [Chilloutmix-Ni](https://civitai.com/models/6424/chilloutmix)
  - Best model I have ever used: [AbyssOrangeMix2 (AOM2)](https://huggingface.co/WarriorMama777/OrangeMixs#abyssorangemix2-aom2)
- Anything series:
- [V3](https://huggingface.co/Linaqruf/anything-v3.0)
- [V4 & V4.5](https://huggingface.co/andite/anything-v4.0)
* PeachMixs - Ultimate & Tachyon version
- Mostly use:
- [AbyssOrangeMix2 (AOM2)](https://huggingface.co/WarriorMama777/OrangeMixs#abyssorangemix2-aom2)
- [basil_mix](https://huggingface.co/nuigurumi/basil_mix)
- [AniDos v1](https://civitai.com/models/6437/anidosmix)
- [DosMix](https://civitai.com/models/6250/dosmix)
- Anything series:
- [V4 & V4.5](https://huggingface.co/andite/anything-v4.0)
How to use:
- VAE: anything you want to use 😉 (but I recommend kl-f8-anime, OrangeMix VAE (NAI), and sd-vae-ft-mse)
- Prompt: the simpler, the better.
- Negative prompt: (worst quality, low quality:1.4), (monochrome:1.1), [(bad_prompt_Version2:0.8)](https://huggingface.co/datasets/Nerfgun3/bad_prompt)
- Sampler: DPM++ SDE Karras
- Steps: DDIM 50~, SDE Karras 20~
- Clipskip: 1 (you can try 2)
- Upscaler : Latent (nearest-exact)
- CFG Scale : 5 -7 is good (4~8)
- Denoise strength: 0.5 maybe
# Model Detail
The AOM2 I use here is the _nsfw version (AbyssOrangeMix2_nsfw).
## PeachMix - Tachyon Version
Another PeachMix version that performs better than V1, V2, and V3.
- Use model:
- V1:
- AOM2 FP16 (Model A)
- Basil + Anidos (Model B)
- V2:
- V1
- Anything 4.0 & 4.5
- [Dalcefo](https://civitai.com/models/5396/dalcefov3painting)
## PeachMix - Ultima Version
Same as the Tachyon version, but aims to be more realistic.
<details>
<summary>Sample Images:</summary>
- V1:
<img src="https://huggingface.co/Alsebay/PeachMixs/resolve/main/Sample-Image/PeachUltima-sample-main.png" width="" height="">
- V2:
<img src="https://huggingface.co/Alsebay/PeachMixs/resolve/main/Sample-Image/PeachUltima2-sample-main.png" width="" height="">
Prompt here:
```
Prompt: (extremely detailed CG unity 8k wallpaper), 8k,4k, (highres_1.1), best quality, (masterpiece_1.3), ([realistic::0.75]), vivid color, 1girl at center, solo, (ultra-detailed),medium breast, (beautiful eyes), looking to viewer, (((white ao dai))), cowboy shot, one eye open, open mouth, slime, water,
Negative prompt: (worst quality:1.4), (low quality:1.4), (monochrome:1.1), (bad_prompt_version2:0.8), nsfw,
```
</details>
- Use model:
- V1:
- AOM2 FP16 (Model A)
- Basil + Anidos + Dos (Model B)
- V2:
- V1
- Anything 4.0 & 4.5
- Dalcefo
## PeachMix V1
FP16 x FP32 mix
### C.A.M(PeachMix1 V1)
C.A.M stands for Chillout AbyssOrange Mix, merged the same way as AOM2.
- Use model:
- AOM2 FP16 (Model A)
- Chillout-Ni (Model B)
### PeachMix2 V1
* About: Tries to mimic AOM2 but uses a different recipe.
- Use model:
- Anything v3 FP16 (Model A)
- Chillout-Ni (Model B)
### PeachMix3 V1
* About: Like PeachMix2, but an enhanced version.
- Use model:
- Anything v4 FP16 (Model A)
- Chillout-Ni (Model B)
## PeachMix V2
FP16 x FP16 mix
### C.A.M 2(PeachMix1 V2)
- Use model:
- AOM2 FP16 (Model A)
- Chillout-Ni FP16(Model B)
### PeachMix2 V2
- Use model:
- Anything v3 FP16 (Model A)
- Chillout-Ni FP16 (Model B)
### PeachMix3 V2
- Use model:
- Anything v4 FP16 (Model A)
- Chillout-Ni FP16 (Model B)
### PeachMix4 V2
* About: An enhanced version of PeachMix3 V2.
- Use model:
- Anything v4.5 FP16 (Model A)
- Chillout-Ni FP16 (Model B)
### PeachMix5 V2
* About: A unique model, merged in a different way.
- Use model:
- AOM2 FP16 (Model A)
- Chillout-Ni FP16(Model B)
- Anything v4.5 FP16 (Model C)
## PeachMix V3
FP32 x FP32 mix. Aimed at seeing the difference between FP16 and FP32 mixes. Aborted for now.
### PeachMix2 V3
- Use model:
- Anything v3 FP32 (Model A)
- Chillout-Ni FP32 (Model B)
#### PeachMix2 V3 Fix version
A fixed version of it; some CLIP weights were missing or in the wrong place. The fixed model is Chillout-Ni.
---
# FAQ
- Q: Where are the example pictures?
  - A: Sorry, I don't have time right now. ._.
- Q: Why don't some models appear in the older/newer versions?
  - A: Sorry, I don't have free time -_-.
- Q: Are V1, V2, and V3 different?
  - A: Yes, maybe.
Here is an example (V2, V1, V3):
<img src="https://files.catbox.moe/4geyk8.png" width="1024" height="">
```
Prompt: 1girl, school uniform, standing, looking back at viewer, cherry blossom,
Negative prompt: worst quality:1.4), (low quality:1.4), (monochrome:1.1), (bad_prompt_version2:0.8),
```
Another here:
<img src="https://files.catbox.moe/f0v76m.png" width="1024" height="">
```
Prompt: 1girl, blue bikini, lying, looking at viewer, ocean, from above,
Negative prompt: worst quality:1.4), (low quality:1.4), (monochrome:1.1), (bad_prompt_version2:0.8),
```
- Q: What about the fix version?
  - A: Here it is:
<img src="https://files.catbox.moe/4cmsdi.png" width="1024" height=""> |
Dabid/test3 | Dabid | 2023-02-23T13:04:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T12:02:37Z | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test3
This model is a fine-tuned version of [jcblaise/bert-tagalog-base-cased](https://huggingface.co/jcblaise/bert-tagalog-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3960
- Accuracy: 0.8683
- Precision: 0.8316
- Recall: 0.8653
- F1: 0.8481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 151 | 0.3770 | 0.8431 | 0.8287 | 0.7951 | 0.8115 |
| No log | 2.0 | 302 | 0.3561 | 0.8528 | 0.7959 | 0.8790 | 0.8354 |
| No log | 3.0 | 453 | 0.3425 | 0.8647 | 0.8636 | 0.8094 | 0.8356 |
| 0.3579 | 4.0 | 604 | 0.3541 | 0.8615 | 0.8090 | 0.8824 | 0.8441 |
| 0.3579 | 5.0 | 755 | 0.3717 | 0.8611 | 0.8075 | 0.8836 | 0.8438 |
| 0.3579 | 6.0 | 906 | 0.3657 | 0.8691 | 0.8352 | 0.8619 | 0.8483 |
| 0.1703 | 7.0 | 1057 | 0.3826 | 0.8700 | 0.8370 | 0.8619 | 0.8493 |
| 0.1703 | 8.0 | 1208 | 0.3960 | 0.8683 | 0.8316 | 0.8653 | 0.8481 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
research-backup/flan-t5-xl-analogy | research-backup | 2023-02-23T12:54:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-13T01:20:34Z |
---
widget:
- text: "generate analogy: mammal is to whale"
example_title: "Analogy Example 1 (semantic relation)"
- text: "generate analogy: wedding is to marriage"
example_title: "Analogy Example 1 (semantic relation, metaphor)"
- text: "generate analogy: London is to U.K."
example_title: "Analogy Example 2 (entity)"
- text: "generate analogy: actual is to actually"
example_title: "Analogy Example 3 (morphological)"
---
# relbert/flan-t5-xl-analogy
This is [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity)
for analogy generation, which is to generate a word pair (eg. `bird is to crow`) given a query (eg. `mammal is to whale`)
so that the query and the generated word pair form an analogy statement.
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="relbert/flan-t5-xl-analogy")
output = pipe("generate analogy: mammal is to whale")
print(output)
>>> [{'generated_text': 'bird is to crow'}]
```
|
adhisetiawan/LunarLander-v2 | adhisetiawan | 2023-02-23T12:49:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T03:02:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.95 +/- 13.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
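Until the author adds their own snippet, a minimal sketch along these lines should work; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the usual SB3 naming convention:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="adhisetiawan/LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the trained agent for a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```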
|
mwissing/ppo-SnowballTarget | mwissing | 2023-02-23T12:46:31Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-23T12:46:26Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: mwissing/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
research-backup/t5-3b-analogy | research-backup | 2023-02-23T12:42:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-23T12:32:33Z |
---
widget:
- text: "generate analogy: mammal is to whale"
example_title: "Analogy Example 1 (semantic relation)"
- text: "generate analogy: wedding is to marriage"
example_title: "Analogy Example 1 (semantic relation, metaphor)"
- text: "generate analogy: London is to U.K."
example_title: "Analogy Example 2 (entity)"
- text: "generate analogy: actual is to actually"
example_title: "Analogy Example 3 (morphological)"
---
# relbert/t5-3b-analogy
This is [t5-3b](https://huggingface.co/t5-3b) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity)
for analogy generation, which is to generate a word pair (eg. `bird is to crow`) given a query (eg. `mammal is to whale`)
so that the query and the generated word pair form an analogy statement.
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="relbert/t5-3b-analogy")
output = pipe("generate analogy: mammal is to whale")
print(output)
>>> [{'generated_text': 'bird is to crow'}]
```
|
tayfen/ppo_LL_default | tayfen | 2023-02-23T12:31:23Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T12:31:12Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -124.18 +/- 48.00
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'tayfen/ppo_LL_default'
'batch_size': 512
'minibatch_size': 128}
```
|
priecar/TFG-summarization-1-epoch | priecar | 2023-02-23T12:25:12Z | 4 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-22T09:00:26Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: priecar/TFG-summarization-1-epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# priecar/TFG-summarization-1-epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8463
- Validation Loss: 1.9840
- Train Rouge1: 19.4867
- Train Rouge2: 9.3173
- Train Rougel: 17.0674
- Train Rougelsum: 17.9128
- Train Gen Len: 18.9860
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 1.8463 | 1.9840 | 19.4867 | 9.3173 | 17.0674 | 17.9128 | 18.9860 | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
algocompretto/q-FrozenLake-v1-4x4-noSlippery | algocompretto | 2023-02-23T12:18:37Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T12:18:33Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="algocompretto/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
smartik/mt5-small-finetuned-xsum | smartik | 2023-02-23T11:56:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-23T11:26:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-xsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0946
- Rouge2: 0.0
- Rougel: 0.0918
- Rougelsum: 0.0925
- Gen Len: 3.8798
## Model description
More information needed
## Intended uses & limitations
More information needed
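As a minimal usage sketch (not part of the original card), the checkpoint can be loaded with the `summarization` pipeline; the input text below is only an illustration:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="smartik/mt5-small-finetuned-xsum",
)

# Placeholder input; replace with the document you want to summarize
text = "Put the long document you want to summarize here."
print(summarizer(text, max_length=64, min_length=5)[0]["summary_text"])
```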
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 2046 | nan | 0.0946 | 0.0 | 0.0918 | 0.0925 | 3.8798 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Arch4ngel/poca-SoccerTwos | Arch4ngel | 2023-02-23T11:52:02Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-23T11:51:48Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Arch4ngel/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Falah/shanasheel-baghdad | Falah | 2023-02-23T11:47:10Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-20T03:28:51Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### shanasheel-baghdad, called in Arabic (شناشيل بغداد), model trained by Falah G. Salieh.
## You can visit my blog: https://iraqprogrammer.wordpress.com/
## Or FB: https://web.facebook.com/falahgs
## Email: [email protected]
With Stable Diffusion, we can now create AI-generated art using our own trained images.
With this model we can create images of old Baghdad houses with traditional balconies, called Shanasheel in Arabic (شناشيل), or anything else you can think of, by testing the concept via the fast A1111 Colab.
Sample images of this concept with simple and easy prompts:
Use any prompt and add the shanasheel-baghdad style word.




|
OmarAlsaabi/distilbert-base-uncased-finetuned-cola | OmarAlsaabi | 2023-02-23T11:46:21Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T10:13:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5434531271960991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7983
- Matthews Correlation: 0.5435
## Model description
More information needed
## Intended uses & limitations
More information needed
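As a minimal sketch (not from the original author), the model can be used with the `text-classification` pipeline to score CoLA-style linguistic acceptability; note that the label names may be the generic `LABEL_0`/`LABEL_1` unless they were mapped during training:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="OmarAlsaabi/distilbert-base-uncased-finetuned-cola",
)

# Acceptability check on an example sentence
print(classifier("The book was written by the author."))
```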
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5255 | 1.0 | 535 | 0.5243 | 0.4122 |
| 0.3496 | 2.0 | 1070 | 0.5007 | 0.5029 |
| 0.2339 | 3.0 | 1605 | 0.5811 | 0.5206 |
| 0.1826 | 4.0 | 2140 | 0.7680 | 0.5174 |
| 0.1346 | 5.0 | 2675 | 0.7983 | 0.5435 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
roscazo/distemist_NER_test | roscazo | 2023-02-23T11:45:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-23T11:28:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distemist_NER_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distemist_NER_test
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0927
- Diso Precision: 0.7135
- Diso Recall: 0.7799
- Diso F1: 0.7452
- Diso Number: 1440
- Overall Precision: 0.7135
- Overall Recall: 0.7799
- Overall F1: 0.7452
- Overall Accuracy: 0.9760
## Model description
More information needed
## Intended uses & limitations
More information needed
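A minimal sketch (not part of the original card) for running the model on Spanish clinical text; the example sentence is only illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="roscazo/distemist_NER_test",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

texto = "Paciente con diabetes mellitus tipo 2 e hipertensión arterial."
for entidad in ner(texto):
    print(entidad["entity_group"], entidad["word"], round(entidad["score"], 3))
```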
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0992 | 1.0 | 1169 | 0.0778 | 0.6166 | 0.7639 | 0.6824 | 1440 | 0.6166 | 0.7639 | 0.6824 | 0.9705 |
| 0.0603 | 2.0 | 2338 | 0.0721 | 0.6867 | 0.7840 | 0.7322 | 1440 | 0.6867 | 0.7840 | 0.7322 | 0.9757 |
| 0.0371 | 3.0 | 3507 | 0.0812 | 0.7182 | 0.7736 | 0.7449 | 1440 | 0.7182 | 0.7736 | 0.7449 | 0.9764 |
| 0.0198 | 4.0 | 4676 | 0.0927 | 0.7135 | 0.7799 | 0.7452 | 1440 | 0.7135 | 0.7799 | 0.7452 | 0.9760 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
lozhnikov/ppo-LunarLander-v2 | lozhnikov | 2023-02-23T11:40:37Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T11:37:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO (Mlp)
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.36 +/- 21.36
name: mean_reward
verified: false
---
# **PPO (Mlp)** Agent playing **LunarLander-v2**
This is a trained model of a **PPO (Mlp)** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
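A minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Download the checkpoint from the Hub and restore the policy (filename is assumed, not verified).
checkpoint = load_from_hub(repo_id="lozhnikov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```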
|
Hawk91/a2c-AntBulletEnv-v0 | Hawk91 | 2023-02-23T11:39:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T11:37:47Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1362.12 +/- 65.49
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
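A minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from the Hub and restore the policy (filename is assumed, not verified).
checkpoint = load_from_hub(repo_id="Hawk91/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```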
|
Falah/dishdasha | Falah | 2023-02-23T11:21:55Z | 5 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-23T08:59:31Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dishdasha (clothing known in Arabic as دشداشة) DreamBooth model trained by Falah G. Saleih
## You can visit my blog: https://iraqprogrammer.wordpress.com/
## or Facebook: https://web.facebook.com/falahgs
email: [email protected]
With Stable Diffusion, we can now create AI art from our own trained images. This model generates images of women wearing the traditional dress known in Arabic as dishdasha (دشداشة), in interior home settings, as popular-style portraits of Arabic women, or just about anything you can think of. Test the concept via the A1111 Colab: fast-Colab-A1111.
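A minimal text-to-image sketch with `diffusers`; the prompt and the CUDA device are illustrative assumptions:
```python
from diffusers import StableDiffusionPipeline
import torch
# Load the DreamBooth-trained pipeline and generate one sample image.
pipe = StableDiffusionPipeline.from_pretrained("Falah/dishdasha", torch_dtype=torch.float16).to("cuda")
image = pipe("a woman wearing dishdasha style dress, interior home, highly detailed").images[0]
image.save("dishdasha_sample.png")
```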
Sample pictures of this concept with simple and easy prompts:
Use any prompt and add the dishdasha style keyword:







|
Horken/q-FrozenLake-v1-4x4-Slippery | Horken | 2023-02-23T11:03:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T11:03:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # plus the load_from_hub helper from the Deep RL course notebook
model = load_from_hub(repo_id="Horken/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Umesh/police-lethal-force-classifier | Umesh | 2023-02-23T10:53:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T08:04:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
model-index:
- name: police-lethal-force-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# police-lethal-force-classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0087
- Accuracy: 0.9980
- F1-score: 0.9964
- Recall: 0.9965
- Precision: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
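A minimal usage sketch with the `transformers` pipeline; the example sentence and the default `LABEL_0` / `LABEL_1` names are illustrative assumptions:
```python
from transformers import pipeline
# Binary classifier for reports of police lethal force; label names follow the fine-tuned config's id2label mapping.
classifier = pipeline("text-classification", model="Umesh/police-lethal-force-classifier")
print(classifier("Officers opened fire on the suspect during the pursuit."))
```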
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 0.0138 | 1.0 | 12050 | 0.0132 | 0.9973 | 0.9951 | 0.9953 | 0.9949 |
| 0.0091 | 2.0 | 24100 | 0.0087 | 0.9980 | 0.9964 | 0.9965 | 0.9963 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
tasinhoque/roberta-large-go-emotions-3 | tasinhoque | 2023-02-23T10:52:49Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T07:56:41Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- f1
model-index:
- name: roberta-large-go-emotions-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: multilabel_classification
config: simplified
split: test
args: simplified
metrics:
- name: F1
type: f1
value: 0.5180
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: multilabel_classification
config: simplified
split: validation
args: simplified
metrics:
- name: F1
type: f1
value: 0.5203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-go-emotions-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset. It achieves the following results on the test set (with a threshold of 0.15):
- Accuracy: 0.44020
- Precision: 0.5041
- Recall: 0.5461
- F1: 0.5180
## Model description
More information needed
## Intended uses & limitations
More information needed
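A minimal multi-label sketch; `top_k=None` (available in recent `transformers` releases) returns a score per emotion, and the 0.15 cut-off mirrors the threshold used for the reported test-set results:
```python
from transformers import pipeline
# Score every emotion label and keep those above the evaluation threshold of 0.15.
classifier = pipeline("text-classification", model="tasinhoque/roberta-large-go-emotions-3", top_k=None)
scores = classifier("I can't believe how well this turned out, thank you so much!")[0]
print([s for s in scores if s["score"] > 0.15])
```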
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy | Precision | Recall | F1 |
| ------------- | ----- | --------------- | -------- | --------- | ------ | ------ |
| No log | 1.0 | 0.0889 | 0.4043 | 0.4807 | 0.4568 | 0.4446 |
| 0.1062 | 2.0 | 0.0828 | 0.4113 | 0.4608 | 0.5363 | 0.4868 |
| 0.1062 | 3.0 | 0.0813 | 0.4201 | 0.5198 | 0.5612 | 0.5227 |
| No log | 4.0 | 0.0862 | 0.4292 | 0.5012 | 0.5558 | 0.5208 |
| 0.0597 | 5.0 | 0.0924 | 0.4329 | 0.5164 | 0.5362 | 0.5151 |
| 0.0597 | 6.0 | 0.0956 | 0.4445 | 0.5241 | 0.5328 | 0.5161 |
| No log | 7.0 | 0.0962 | 0.4648 | 0.5138 | 0.5277 | 0.5151 |
| 0.0458 | 8.0 | 0.0962 | 0.4462 | 0.5257 | 0.5270 | 0.5203 |
| 0.0458 | 9.0 | 0.1029 | 0.4432 | 0.5076 | 0.5249 | 0.5111 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
kylzer/asr_skripsi_colab_common_voice | kylzer | 2023-02-23T10:50:20Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-12-19T06:00:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: asr_skripsi_colab_common_voice
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.36856617647058826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asr_skripsi_colab_common_voice
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3839
- Wer: 0.3686
## Model description
More information needed
## Intended uses & limitations
More information needed
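A minimal transcription sketch; `sample.wav` is a placeholder for any 16 kHz Indonesian speech recording:
```python
from transformers import pipeline
# Transcribe an audio file with the fine-tuned wav2vec2 checkpoint (ffmpeg handles decoding and resampling).
asr = pipeline("automatic-speech-recognition", model="kylzer/asr_skripsi_colab_common_voice")
print(asr("sample.wav")["text"])
```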
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4354 | 3.64 | 400 | 1.9595 | 1.0 |
| 0.7227 | 7.27 | 800 | 0.4532 | 0.5039 |
| 0.3293 | 10.91 | 1200 | 0.4277 | 0.4425 |
| 0.2298 | 14.55 | 1600 | 0.3947 | 0.4182 |
| 0.1789 | 18.18 | 2000 | 0.3960 | 0.4009 |
| 0.1496 | 21.82 | 2400 | 0.3793 | 0.3848 |
| 0.122 | 25.45 | 2800 | 0.3794 | 0.3795 |
| 0.1056 | 29.09 | 3200 | 0.3839 | 0.3686 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
JYC333/PPO-LunarLander-unit8 | JYC333 | 2023-02-23T10:35:51Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T08:46:16Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 118.93 +/- 110.57
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
thanhuitha/dreembooth_ckpt_lora | thanhuitha | 2023-02-23T10:29:59Z | 3 | 1 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-23T10:22:18Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - thanhuitha/dreembooth_ckpt_lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




|
moodlep/rl_course_vizdoom_health_gathering_supreme | moodlep | 2023-02-23T10:09:04Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T10:08:53Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.12 +/- 5.77
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r moodlep/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
happycoding/sac-PandaReachDense-v2 | happycoding | 2023-02-23T10:07:15Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T07:09:16Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.29 +/- 0.08
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaReachDense-v2**
This is a trained model of a **SAC** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
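A minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC
# Download the checkpoint from the Hub and restore the policy (filename is assumed, not verified).
checkpoint = load_from_hub(repo_id="happycoding/sac-PandaReachDense-v2", filename="sac-PandaReachDense-v2.zip")
model = SAC.load(checkpoint)
```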
|
research-backup/flan-t5-xl-analogy-nell | research-backup | 2023-02-23T09:48:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-23T09:33:43Z |
---
widget:
- text: "generate analogy: mammal is to whale"
example_title: "Analogy Example 1 (semantic relation)"
- text: "generate analogy: wedding is to marriage"
example_title: "Analogy Example 1 (semantic relation, metaphor)"
- text: "generate analogy: London is to U.K."
example_title: "Analogy Example 2 (entity)"
- text: "generate analogy: actual is to actually"
example_title: "Analogy Example 3 (morphological)"
---
# relbert/flan-t5-xl-analogy-nell
This is [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) fine-tuned on [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity)
for analogy generation, which is to generate a word pair (eg. `bird is to crow`) given a query (eg. `mammal is to whale`)
so that the query and the generated word pair form an analogy statement.
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="relbert/flan-t5-xl-analogy-nell")
output = pipe("generate analogy: mammal is to whale")
print(output)
>>> [{'generated_text': 'bird is to crow'}]
```
|
BeardedJohn/bert-finetuned-ner-per-v8 | BeardedJohn | 2023-02-23T09:46:44Z | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-23T09:46:26Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-ner-per-v8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-per-v8
This model is a fine-tuned version of [BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2](https://huggingface.co/BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 846, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Tritkoman/EnglishtoOldEnglishV1 | Tritkoman | 2023-02-23T09:30:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"translation",
"en",
"es",
"dataset:Tritkoman/autotrain-data-oldenglish",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-02-23T09:25:55Z | ---
tags:
- autotrain
- translation
language:
- en
- es
datasets:
- Tritkoman/autotrain-data-oldenglish
co2_eq_emissions:
emissions: 7.273007332989732
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3679898272
- CO2 Emissions (in grams): 7.2730
## Validation Metrics
- Loss: 4.128
- SacreBLEU: 0.545
- Gen len: 25.544 |
besa2001/rl_course_vizdoom_health_gathering_supreme | besa2001 | 2023-02-23T09:30:15Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T09:30:04Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.79 +/- 4.30
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r besa2001/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
michal512/poca-SoccerTwos | michal512 | 2023-02-23T09:27:39Z | 327 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-23T09:25:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: michal512/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
OCRLAB/cn-blip-base | OCRLAB | 2023-02-23T09:23:29Z | 0 | 0 | null | [
"zh",
"license:bsd-3-clause",
"region:us"
]
| null | 2023-02-23T09:15:30Z | ---
license: bsd-3-clause
language:
- zh
--- |
Hawk91/Pyramids | Hawk91 | 2023-02-23T09:09:42Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-23T09:09:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: Hawk91/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
toinsson/poca-SoccerTwos-Base | toinsson | 2023-02-23T09:07:15Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-23T09:07:06Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: toinsson/poca-SoccerTwos-Base
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jacksee/ppo-LunarLander-v2 | jacksee | 2023-02-23T08:59:27Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T08:58:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -69.89 +/- 95.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Sjdan/mst_1 | Sjdan | 2023-02-23T08:49:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-23T01:16:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mst_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mst_1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0038
- Wer: 0.9915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0362 | 1.15 | 500 | 2.7219 | 1.0 |
| 1.9536 | 2.29 | 1000 | 0.5060 | 2.0274 |
| 0.2347 | 3.44 | 1500 | 0.0268 | 1.0462 |
| 0.0633 | 4.59 | 2000 | 0.0078 | 1.0 |
| 0.0359 | 5.73 | 2500 | 0.0047 | 0.9949 |
| 0.014 | 6.88 | 3000 | 0.0038 | 0.9915 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.1+cu116
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Inesence/donut-base-lvtest | Inesence | 2023-02-23T08:38:08Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-02-23T08:27:19Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-lvtest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-lvtest
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
smko77/dqn-SpaceInvadersNoFrameskip-v4 | smko77 | 2023-02-23T08:37:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T08:36:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 501.00 +/- 177.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga smko77 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga smko77 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga smko77
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
eugene-d/a2c-PandaReachDense-v2 | eugene-d | 2023-02-23T08:21:56Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-21T20:17:22Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.54 +/- 0.79
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Hawk91/SnowballTarget-ppo | Hawk91 | 2023-02-23T08:14:26Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-23T08:14:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Hawk91/SnowballTarget-ppo
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sgoodfriend/ppo-procgen-bossfight-easy | sgoodfriend | 2023-02-23T07:49:40Z | 0 | 0 | rl-algo-impls | [
"rl-algo-impls",
"procgen-bossfight-easy",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-23T07:49:35Z | ---
library_name: rl-algo-impls
tags:
- procgen-bossfight-easy
- ppo
- deep-reinforcement-learning
- reinforcement-learning
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 9.91 +/- 5.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: procgen-bossfight-easy
type: procgen-bossfight-easy
---
# **PPO** Agent playing **procgen-bossfight-easy**
This is a trained model of a **PPO** agent playing **procgen-bossfight-easy** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/f3w1hwyb.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [21ee1ab](https://github.com/sgoodfriend/rl-algo-impls/tree/21ee1ab96a186676e5ed2f8c3185902f7c7bca7a). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:----------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | bossfight | 1 | 8.03125 | 6.37125 | 64 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/ok9cp59v) |
| ppo | bossfight | 2 | 9.90625 | 5.36982 | 64 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/goavynh9) |
| ppo | bossfight | 3 | 8.98438 | 5.94635 | 64 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/b5yxrur0) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assumes you have a Weights & Biases project to upload runs to.
By default training goes to a rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be sufficiently different to not be able to reproduce similar
results. You might need to checkout the commit the agent was trained on:
[21ee1ab](https://github.com/sgoodfriend/rl-algo-impls/tree/21ee1ab96a186676e5ed2f8c3185902f7c7bca7a).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/goavynh9
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance to reproduce these results, you'll want to checkout the
commit the agent was trained on: [21ee1ab](https://github.com/sgoodfriend/rl-algo-impls/tree/21ee1ab96a186676e5ed2f8c3185902f7c7bca7a). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env procgen-bossfight-easy --seed 2
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/f3w1hwyb were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 2048
clip_range: 0.2
clip_range_vf: 0.2
ent_coef: 0.01
gae_lambda: 0.95
gamma: 0.999
learning_rate: 0.0005
n_epochs: 3
n_steps: 256
vf_coef: 0.5
env: procgen-bossfight-easy
env_hyperparams:
is_procgen: true
make_kwargs:
distribution_mode: easy
n_envs: 64
normalize: true
env_id: bossfight
eval_params:
deterministic: false
ignore_first_episode: true
n_timesteps: 25000000
policy_hyperparams:
activation_fn: relu
cnn_feature_dim: 256
cnn_layers_init_orthogonal: false
cnn_style: impala
init_layers_orthogonal: true
seed: 2
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_21ee1ab
- host_138-2-238-100
```
|
trinket2023/BERTModelQA2 | trinket2023 | 2023-02-23T07:43:21Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-23T06:24:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: BERTModelQA2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTModelQA2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1894
## Model description
More information needed
## Intended uses & limitations
More information needed
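A minimal extractive QA sketch; the question/context pair is illustrative:
```python
from transformers import pipeline
# SQuAD-style extractive question answering: the answer is a span copied from the context.
qa = pipeline("question-answering", model="trinket2023/BERTModelQA2")
print(qa(question="Where is the Eiffel Tower located?", context="The Eiffel Tower is located in Paris, France."))
```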
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7749 | 1.0 | 563 | 1.6499 |
| 1.3956 | 2.0 | 1126 | 1.4280 |
| 1.0094 | 3.0 | 1689 | 1.4128 |
| 0.7522 | 4.0 | 2252 | 1.5635 |
| 0.5826 | 5.0 | 2815 | 1.6302 |
| 0.4356 | 6.0 | 3378 | 1.7976 |
| 0.3399 | 7.0 | 3941 | 1.9001 |
| 0.2234 | 8.0 | 4504 | 2.0518 |
| 0.1806 | 9.0 | 5067 | 2.1244 |
| 0.1543 | 10.0 | 5630 | 2.1894 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
LowRAs/rpgLoRA | LowRAs | 2023-02-23T07:27:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T07:15:52Z | ---
license: creativeml-openrail-m
---
|
harisumant/ppo-Pyramids | harisumant | 2023-02-23T07:14:17Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-23T07:14:11Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: harisumant/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zambezivoice/xls-r-300m-toi-nst | zambezivoice | 2023-02-23T07:12:17Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-23T03:36:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-300m-toi-nst
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-toi-nst
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9700
- Wer: 0.8674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1632 | 0.3 | 500 | 0.4058 | 0.6069 |
| 0.7323 | 0.59 | 1000 | 0.2550 | 0.4421 |
| 0.6532 | 0.89 | 1500 | 0.2360 | 0.3812 |
| 0.592 | 1.18 | 2000 | 0.2007 | 0.3412 |
| 0.57 | 1.48 | 2500 | 0.1979 | 0.3382 |
| 0.5558 | 1.77 | 3000 | 0.1853 | 0.2995 |
| 0.5451 | 2.07 | 3500 | 0.1887 | 0.3151 |
| 0.5451 | 2.36 | 4000 | 0.5467 | 0.5383 |
| 1.3259 | 2.66 | 4500 | 0.9700 | 0.8674 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
LowRAs/nedLoRa | LowRAs | 2023-02-23T07:10:28Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T04:32:12Z | ---
license: creativeml-openrail-m
---
|
smartbotfactory/dqn-SpaceInvadersNoFrameskip-v4 | smartbotfactory | 2023-02-23T07:00:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-22T12:25:44Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 558.00 +/- 101.42
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga smartbotfactory -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga smartbotfactory -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga smartbotfactory
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
eichiuehara/distilroberta-base-finetuned-wikitext2 | eichiuehara | 2023-02-23T06:49:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-23T00:59:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8359
## Model description
More information needed
## Intended uses & limitations
More information needed
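A minimal fill-mask sketch; distilroberta-based checkpoints use the `<mask>` token:
```python
from transformers import pipeline
# Rank candidate tokens for the masked position.
unmasker = pipeline("fill-mask", model="eichiuehara/distilroberta-base-finetuned-wikitext2")
print(unmasker("The capital of France is <mask>."))
```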
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9225 |
| 1.993 | 2.0 | 4812 | 1.8837 |
| 1.9616 | 3.0 | 7218 | 1.8234 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Anon1216/kdllora-v1.5 | Anon1216 | 2023-02-23T06:49:27Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T06:44:47Z | ---
license: creativeml-openrail-m
---
This is the Korean Doll Likeness LoRA from Civitai. I'm not the creator; credits go to https://civitai.com/user/Kbr |
LowRAs/realisticvisionLoRa | LowRAs | 2023-02-23T06:44:32Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T04:33:09Z | ---
license: creativeml-openrail-m
---
|
Leo97/KcELECTRA-small-v2022-finetuned-apeach | Leo97 | 2023-02-23T06:29:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T02:58:33Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: KcELECTRA-small-v2022-finetuned-apeach
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KcELECTRA-small-v2022-finetuned-apeach
This model is a fine-tuned version of [beomi/KcELECTRA-small-v2022](https://huggingface.co/beomi/KcELECTRA-small-v2022) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5883
- Accuracy: 0.7247
- F1: 0.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6767 | 1.0 | 124 | 0.6518 | 0.6220 | 0.5731 |
| 0.6006 | 2.0 | 248 | 0.5883 | 0.7247 | 0.7109 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
nandakishormpai/t5-small-github-repo-tag-generation | nandakishormpai | 2023-02-23T06:27:17Z | 38 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"documentation_tag",
"tag_generation",
"github",
"github_tag",
"tagging",
"github_repo",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-02-22T16:51:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- documentation_tag
- tag_generation
- github
- github_tag
- tagging
- github_repo
- summarization
metrics:
- rouge
model-index:
- name: t5-small-github-repo-tag-generation
results: []
widget:
- text: "susya plant disease detector ml powered app to assist farmers in crop disease detection and alerts product walkthrough download product apk here machine learning python notebook solutions system to detect the problem when it arises and warn the farmers disease detection using machine learning model enabled through android app which uses flask api solution to overcome the problem once it arises remedy is suggested for the disease detected by the app using ml model solution that will ensure that the problem will never occur in the future again pdf report is generated on the disease predicted along with user information pdf can be used as a document to be submitted in nearby krishibhavan thereby seeking help easily method that will reduce the impact of the dilemma to a significant level disease detected news can be sent to other users as a notification which contatins userplant and disease this will help other farmers take up precautions thereby reducing the impact of the dilemma to a significant level considering a region machine learning model multiclass image classifier built on pytorch framework using cnn architecture currently project detects 17 states of disease in 4 plants aiming kerala state namely cherry pepper potato and tomato framework pytorch architecture convolutional neural networks validation accuracy 777 how to train upload the python notebook to google colab and run each cell for training the model i have included a demo dataset to configure quickly you can use this kaggle dataset which is the original one with huge amount of pictures how it works the input image dataset is converted to tensor and is passed through a cnn model returning an output value corresponding to the plant disease input image tensor is passed through four convolutional layers and then flattened and inputted to fully connected layers api api is built using flask framework and hosted in render the api provides two functionalities they are plant disease detection accepts a post request with an image in the form of base64 string and returns plant disease and remedy notification accepts a post request with plant user and disease which is then pushed as a notification to other users to warn them regarding a probable outbreak of disease how to use api has been built on this classifier url user has to send a post request to the given api with base64 string of the image to be input python import requests url imgdata base64 string of image r requestsposturljson imageimgdata printrtextstrip outputpython diseaseseptoria leaf spotplanttomatoremedyremove infected leaves immediatelyfungonil and daconil app download product apk here to run app shell cd app flutter run to build app shell cd app flutter build apk features authentication using google oauth user profile page uses camera or device media to get an image of the crop preview the image and sends it to api for disease detection result page showing detected disease and remedy generates a pdf report to saveshare predicted disease details option to send the generated result as a notification warning to other users tech stack used python pytorch flask flutter firebase contributors nanda kishor m paiml model api ajay krishna k v flutter dev api hari krishnan uml model data collection antony s johnflutter dev"
example_title: 'Github Cleaned Readme #1'
language:
- en
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-github-repo-tag-generation
Machine learning model to generate tags for GitHub repositories based on their documentation (README.md). This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small), trained on a collection of repositories from [Kaggle/vatsalparsaniya/github-repositories-analysis](https://www.kaggle.com/datasets/vatsalparsaniya/github-repositories-analysis). While tag prediction is usually formulated as a multi-label classification problem, this model treats _tag generation_ as a text2text generation task (inspiration and reference: [fabiochiu/t5-base-tag-generation](https://huggingface.co/fabiochiu/t5-base-tag-generation)).
<br><br>
The Inference API here expects cleaned README text; the code for cleaning a README is given below.
<br><br>
Finetuning Notebook Reference: [Hugging face summarization notebook](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb).
# How to use the model
Input : GitHub Repo URL<br>
Output : Tags
Remarks: Ensure the repo has README.<b>md</b>
### Installations
```python
pip install transformers torch nltk clean-text beautifulsoup4 markdown requests
```
### Code
Imports
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import re
import nltk
nltk.download('punkt')
from cleantext import clean
from bs4 import BeautifulSoup
from markdown import Markdown
import requests
from io import StringIO
import string
```
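Model Loading
The main function below expects `tokenizer` and `model` to be defined. A minimal loading sketch; the repository id used here is an assumption, so replace it with this model's actual Hugging Face id:
```python
# NOTE: hypothetical repository id; point this at the actual fine-tuned checkpoint.
model_name = "nandakishormpai/t5-small-github-repo-tag-generation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```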
Preprocessing
```python
# Script to convert Markdown to plain text
# Reference : Stackoverflow == https://stackoverflow.com/questions/761824/python-how-to-convert-markdown-formatted-text-to-text
def unmark_element(element, stream=None):
    if stream is None:
        stream = StringIO()
    if element.text:
        stream.write(element.text)
    for sub in element:
        unmark_element(sub, stream)
    if element.tail:
        stream.write(element.tail)
    return stream.getvalue()

# patching Markdown
Markdown.output_formats["plain"] = unmark_element
__md = Markdown(output_format="plain")
__md.stripTopLevelTags = False

def unmark(text):
    return __md.convert(text)

def readme_extractor(github_repo_url):
    try:
        # Fetch the repository page and parse it with BeautifulSoup
        html_content = requests.get(github_repo_url).text
        soup = BeautifulSoup(html_content, "html.parser")
        # Get the README file URL from the repository page
        readme_url = "https://github.com/" + soup.find("a", {"title": "README.md"}).get("href")
        # Generate the raw README file URL, e.g.
        # https://github.com/rasbt/python-machine-learning-book/blob/master/README.md
        #   --> https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/README.md
        readme_raw_url = readme_url.replace("/blob/", "/")
        readme_raw_url = readme_raw_url.replace("github.com", "raw.githubusercontent.com")
        readme_html_content = requests.get(readme_raw_url).text
        readme_soup = BeautifulSoup(readme_html_content, "html.parser")
        readme_text = readme_soup.get_text()
        documentation_text = unmark(readme_text)
        return documentation_text
    except Exception:
        print("FAILED : ", github_repo_url)
        return "README_NOT_MARKDOWN"

def clean_readme(readme):
    # Remove emojis, URLs, punctuation and newlines; lowercase everything
    text = clean(readme, no_emoji=True)
    for url in re.findall(r'http://\S+|https://\S+', text):
        text = text.replace(url, '')
    text = "".join([ch for ch in text if ch not in string.punctuation])
    text = text.lower()
    text = text.replace("\n", " ")
    return text
```
Postprocess Tags [Removing duplicates]
```python
def post_process_tags(tag_string):
    final_tags = []
    for tag in tag_string.split(","):
        if tag.strip() in final_tags or len(tag.strip()) <= 1:
            continue
        final_tags.append(tag.strip())
    return final_tags
```
Main Function
```python
def github_tags_generate(github_repo_url):
    readme = readme_extractor(github_repo_url)
    readme = clean_readme(readme)
    inputs = tokenizer([readme], max_length=1536, truncation=True, return_tensors="pt")
    output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10,
                            max_length=128)
    decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
    tags = post_process_tags(decoded_output)
    return tags

github_tags_generate("https://github.com/Enter_Repo_URL")
# github_tags_generate("https://github.com/nandakishormpai/Plant_Disease_Detector")
# ['python', 'machine learning', 'ml', 'cnn']
```
## Dataset Preparation
Of the 1000 repositories in the dataset, only 870 had tags and a README longer than 50 characters; these were retained, and their README.md files were scraped using BeautifulSoup.
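A rough sketch of that filtering step is shown below; the file name and column names are assumptions for illustration, not the actual preprocessing script:
```python
import pandas as pd

# Hypothetical file/column names; the original dataset layout may differ.
df = pd.read_csv("github_repositories.csv")
mask = df["tags"].notna() & (df["readme_text"].str.len() > 50)
df = df[mask]  # roughly 870 repositories retained
```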
## Intended uses & limitations
The generated output may contain duplicate tags, which should be handled when postprocessing the results; the postprocessing code is given above.
## Results
It achieves the following results on the evaluation set:
- Loss: 1.8196
- Rouge1: 25.0142
- Rouge2: 8.1802
- Rougel: 22.77
- Rougelsum: 22.8017
- Gen Len: 19.0
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
SimpleCai/segformer-b0-scene-parse-150 | SimpleCai | 2023-02-23T06:12:19Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-23T06:11:37Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
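## How to use
A minimal inference sketch, assuming the image processor configuration was saved with this checkpoint (if it was not, the processor can be loaded from `nvidia/mit-b0` instead):
```python
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
import torch

checkpoint = "SimpleCai/segformer-b0-scene-parse-150"
processor = AutoImageProcessor.from_pretrained(checkpoint)  # or "nvidia/mit-b0"
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

# Any RGB image works; this URL is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits come out at 1/4 resolution; upsample to the input size before taking argmax.
upsampled = torch.nn.functional.interpolate(
    outputs.logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_seg = upsampled.argmax(dim=1)[0]  # per-pixel class indices
```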
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
kevinscaria/ate_tk-instruct-base-def-pos-restaurants | kevinscaria | 2023-02-23T06:09:11Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"NLP",
"dataset:Yaxin/SemEval2014Task4Raw",
"arxiv:2302.08624",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-23T05:51:28Z | ---
license: mit
tags:
- NLP
datasets:
- Yaxin/SemEval2014Task4Raw
metrics:
- f1
- precision
- recall
pipeline_tag: text2text-generation
---
# ate_tk-instruct-base-def-pos-restaurants
This model is finetuned for the Aspect Term Extraction (ATE) subtask. The finetuning was carried out by adding prompts of the form:
- definition + 2 positive examples
The prompt is prepended to each input review. It is important to note that **this model was fine-tuned on samples from the restaurant domain.**
The code for the official implementation of the paper [**InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis**](https://arxiv.org/abs/2302.08624) can be
found [here](https://github.com/kevinscaria/InstructABSA).
For the ATE subtask, this model is the current SOTA.
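## How to use
A short usage sketch is given below. The instruction prompt shown is only a placeholder assumption; the exact definition and positive examples used during fine-tuning are defined in the InstructABSA repository.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kevinscaria/ate_tk-instruct-base-def-pos-restaurants"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical prompt: mirror the real instruction (definition + 2 positive examples)
# from the InstructABSA repository for best results.
prompt = (
    "Definition: The output will be the aspect terms present in the review. "
    "Positive example 1- input: The food was great but the service was slow. "
    "output: food, service "
    "Positive example 2- input: Loved the ambience and the desserts. "
    "output: ambience, desserts "
    "Now complete the following example- input: "
)
review = "The pasta was delicious but the waiter was rude."
inputs = tokenizer(prompt + review + " output:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```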
## Training data
InstructABSA models are trained on the benchmark dataset for Aspect Based Sentiment Analysis tasks viz. SemEval 2014. This [dataset](https://alt.qcri.org/semeval2014/task4/index.php?id=data-and-tools) consists of reviews
from the laptop and restaurant domains and their corresponding aspect term and polarity labels.
### BibTeX entry and citation info
If you use this model in your work, please cite the following paper:
```bibtex
@inproceedings{Scaria2023InstructABSAIL,
title={InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis},
author={Kevin Scaria and Himanshu Gupta and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral},
year={2023}
}
``` |
Lenzrix/expmode7 | Lenzrix | 2023-02-23T06:08:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-02-23T05:19:22Z | Hello ndimensional, I just wanted to take a moment to express my gratitude for the incredible 3D model you have created. Your work has inspired me, and I recently had the pleasure of using your model in one of my own projects. I was amazed by the level of detail and realism in your model, and it truly brought my work to life. Thank you for sharing your talents with the world and for providing such amazing resources for fellow 3D designers. I would like to give credit where credit is due, so I have included a link to your original model in my project. Once again, thank you so much for your amazing work! Best regards, Lenzrix.
Link to original author: https://civitai.com/user/ndimensional
This is just an optional download for Hugging Face. |
Airic/Kenshi | Airic | 2023-02-23T06:03:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-23T05:55:04Z | ---
license: creativeml-openrail-m
---
|
evincent18/distilbert-base-uncased-finetuned-imdb | evincent18 | 2023-02-23T06:00:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-23T05:52:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
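## How to use
A quick fill-mask sketch with this checkpoint (the example sentence is arbitrary):
```python
from transformers import pipeline

mask_filler = pipeline(
    "fill-mask", model="evincent18/distilbert-base-uncased-finetuned-imdb"
)
for pred in mask_filler("This movie was really [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```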
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|