modelId<br/>string (5 to 139 chars) | author<br/>string (2 to 42 chars) | last_modified<br/>timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-07-13 12:28:20) | downloads<br/>int64 (0 to 223M) | likes<br/>int64 (0 to 11.7k) | library_name<br/>string (518 classes) | tags<br/>list (1 to 4.05k items) | pipeline_tag<br/>string (55 classes) | createdAt<br/>timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-07-13 12:26:25) | card<br/>string (11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
qgallouedec/a2c-Walker2DBulletEnv-v0-3640112043 | qgallouedec | 2023-02-27T13:53:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:52:30Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
metrics:
- type: mean_reward
value: 573.31 +/- 411.44
name: mean_reward
verified: false
---
# **A2C** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of an **A2C** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env Walker2DBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env Walker2DBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env Walker2DBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env Walker2DBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
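For reference, here is a minimal sketch of how these zoo hyperparameters map onto the `A2C` constructor in stable-baselines3. The zoo-only keys (`n_timesteps`, `normalize`, `normalize_kwargs`) are handled by the training framework rather than the constructor, and the `linear_schedule` helper is an assumption about how `'lin_0.00096'` is interpreted:
```python
from stable_baselines3 import A2C

# Assumption: 'lin_0.00096' denotes a learning rate that decays linearly to 0.
def linear_schedule(initial_value: float):
    return lambda progress_remaining: progress_remaining * initial_value

model = A2C(
    "MlpPolicy",
    "Walker2DBulletEnv-v0",  # requires `import pybullet_envs` to register the env
    learning_rate=linear_schedule(0.00096),
    n_steps=8,
    gamma=0.99,
    gae_lambda=0.9,
    ent_coef=0.0,
    vf_coef=0.4,
    max_grad_norm=0.5,
    use_rms_prop=True,
    use_sde=True,
    normalize_advantage=False,
    policy_kwargs=dict(log_std_init=-2, ortho_init=False),
)
```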
|
chronbmm/xlm-roberta-vedic | chronbmm | 2023-02-27T13:49:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-02-27T13:43:32Z | A model for Vedic Sanskrit based on XLM-RoBERTa-base. Accepts Devanagari as input.
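A minimal feature-extraction sketch (mean pooling over the last hidden state is an assumed choice, not something this card specifies):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("chronbmm/xlm-roberta-vedic")
model = AutoModel.from_pretrained("chronbmm/xlm-roberta-vedic")

# Devanagari input, as the card specifies (opening of Rigveda 1.1).
inputs = tokenizer("अग्निमीळे पुरोहितम्", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)  # assumed mean pooling
```
|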
qgallouedec/a2c-BipedalWalker-v3-3269560138 | qgallouedec | 2023-02-27T13:45:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:44:19Z | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: 281.54 +/- 1.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **BipedalWalker-v3**
This is a trained model of an **A2C** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalker-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalker-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalker-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalker-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BipedalWalker-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BipedalWalker-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 16),
('n_steps', 8),
('n_timesteps', 5000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-HalfCheetahBulletEnv-v0-2025636415 | qgallouedec | 2023-02-27T13:43:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetahBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:42:33Z | ---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
metrics:
- type: mean_reward
value: 2448.88 +/- 17.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of an **A2C** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env HalfCheetahBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env HalfCheetahBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-AntBulletEnv-v0-3187979296 | qgallouedec | 2023-02-27T13:42:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:41:27Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2769.58 +/- 81.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env AntBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env AntBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env AntBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env AntBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env AntBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env AntBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
10gallonhead/luner_lander | 10gallonhead | 2023-02-27T13:37:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T05:44:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 225.11 +/- 71.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is not given on this card, so it is left as a placeholder):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# `<filename>.zip` is a placeholder -- check the repo's file list for the actual name.
checkpoint = load_from_hub(repo_id="10gallonhead/luner_lander", filename="<filename>.zip")
model = PPO.load(checkpoint)
```
|
qgallouedec/a2c-BreakoutNoFrameskip-v4-1726774983 | qgallouedec | 2023-02-27T13:36:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:35:41Z | ---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 1.60 +/- 2.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BreakoutNoFrameskip-v4 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BreakoutNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BreakoutNoFrameskip-v4 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BreakoutNoFrameskip-v4 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
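The `env_wrapper` and `frame_stack` entries correspond to standard Atari preprocessing; as an illustration (not the zoo's exact code), the equivalent vectorized environment can be built with stable-baselines3 helpers:
```python
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# AtariWrapper preprocessing across 16 parallel envs, with 4 stacked frames
# as in the hyperparameters above.
env = make_atari_env("BreakoutNoFrameskip-v4", n_envs=16)
env = VecFrameStack(env, n_stack=4)
```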
|
Gabcsor/q-Taxi-v2 | Gabcsor | 2023-02-27T13:31:31Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:31:26Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook,
# not a library import.
model = load_from_hub(repo_id="Gabcsor/q-Taxi-v2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
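A short greedy-rollout sketch building on the snippet above; the `"qtable"` key and the classic 4-tuple `gym` step API are assumptions about the pickled dict and the installed gym version:
```python
import numpy as np

qtable = model["qtable"]  # assumed key for the Q-table array
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```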
|
qgallouedec/a2c-AntBulletEnv-v0-2794615594 | qgallouedec | 2023-02-27T13:31:19Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:30:24Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2556.84 +/- 67.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env AntBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env AntBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env AntBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env AntBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env AntBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env AntBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-LunarLanderContinuous-v2-2329749513 | qgallouedec | 2023-02-27T13:30:13Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLanderContinuous-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:29:43Z | ---
library_name: stable-baselines3
tags:
- LunarLanderContinuous-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
metrics:
- type: mean_reward
value: 46.12 +/- 151.95
name: mean_reward
verified: false
---
# **A2C** Agent playing **LunarLanderContinuous-v2**
This is a trained model of an **A2C** agent playing **LunarLanderContinuous-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env LunarLanderContinuous-v2 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env LunarLanderContinuous-v2 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env LunarLanderContinuous-v2 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env LunarLanderContinuous-v2 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env LunarLanderContinuous-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env LunarLanderContinuous-v2 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_7e-4'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 5000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-Walker2DBulletEnv-v0-1361160612 | qgallouedec | 2023-02-27T13:27:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:27:03Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
metrics:
- type: mean_reward
value: 800.99 +/- 383.56
name: mean_reward
verified: false
---
# **A2C** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of an **A2C** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env Walker2DBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env Walker2DBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env Walker2DBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env Walker2DBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-ReacherBulletEnv-v0-3062032975 | qgallouedec | 2023-02-27T13:26:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"ReacherBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:26:13Z | ---
library_name: stable-baselines3
tags:
- ReacherBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ReacherBulletEnv-v0
type: ReacherBulletEnv-v0
metrics:
- type: mean_reward
value: 17.09 +/- 10.98
name: mean_reward
verified: false
---
# **A2C** Agent playing **ReacherBulletEnv-v0**
This is a trained model of an **A2C** agent playing **ReacherBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env ReacherBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env ReacherBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env ReacherBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env ReacherBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env ReacherBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env ReacherBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-LunarLanderContinuous-v2-3898385124 | qgallouedec | 2023-02-27T13:26:03Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLanderContinuous-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:25:34Z | ---
library_name: stable-baselines3
tags:
- LunarLanderContinuous-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
metrics:
- type: mean_reward
value: 131.67 +/- 101.90
name: mean_reward
verified: false
---
# **A2C** Agent playing **LunarLanderContinuous-v2**
This is a trained model of an **A2C** agent playing **LunarLanderContinuous-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env LunarLanderContinuous-v2 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env LunarLanderContinuous-v2 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo a2c --env LunarLanderContinuous-v2 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env LunarLanderContinuous-v2 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env LunarLanderContinuous-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env LunarLanderContinuous-v2 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_7e-4'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 5000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
Roberto/poca-SoccerTwos | Roberto | 2023-02-27T13:20:29Z | 41 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:20:04Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Roberto/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on **Watch the agent play** 👀
|
bigmorning/whisper_new_split_0015 | bigmorning | 2023-02-27T13:12:58Z | 61 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-26T12:43:22Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_new_split_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_new_split_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2120
- Train Accuracy: 0.0320
- Train Wermet: 19.0961
- Validation Loss: 0.4925
- Validation Accuracy: 0.0311
- Validation Wermet: 22.3187
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.1027 | 0.0113 | 52.5530 | 4.4267 | 0.0121 | 41.4796 | 0 |
| 4.3285 | 0.0126 | 38.6893 | 3.9835 | 0.0145 | 33.6050 | 1 |
| 3.4573 | 0.0168 | 30.7714 | 2.5568 | 0.0215 | 31.7559 | 2 |
| 2.0878 | 0.0226 | 20.5131 | 1.5738 | 0.0257 | 21.2159 | 3 |
| 1.3529 | 0.0258 | 17.4367 | 1.1712 | 0.0276 | 17.7695 | 4 |
| 0.9953 | 0.0275 | 18.7308 | 0.9389 | 0.0287 | 20.5259 | 5 |
| 0.7852 | 0.0286 | 18.5731 | 0.8074 | 0.0294 | 17.6576 | 6 |
| 0.6428 | 0.0293 | 18.2945 | 0.7219 | 0.0298 | 19.9850 | 7 |
| 0.5384 | 0.0299 | 18.9258 | 0.6610 | 0.0301 | 18.9327 | 8 |
| 0.4565 | 0.0304 | 19.0749 | 0.6117 | 0.0304 | 21.9796 | 9 |
| 0.3901 | 0.0308 | 19.2099 | 0.5693 | 0.0306 | 18.0965 | 10 |
| 0.3348 | 0.0312 | 20.4777 | 0.5449 | 0.0307 | 19.9518 | 11 |
| 0.2877 | 0.0315 | 20.3181 | 0.5232 | 0.0309 | 20.4017 | 12 |
| 0.2471 | 0.0318 | 19.2073 | 0.5057 | 0.0310 | 18.7612 | 13 |
| 0.2120 | 0.0320 | 19.0961 | 0.4925 | 0.0311 | 22.3187 | 14 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.11.0
- Tokenizers 0.13.2
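The card omits usage, so here is a minimal inference sketch with the TensorFlow weights; `audio` is a placeholder for a 1-D float array of 16 kHz mono audio, and loading the processor from this repo assumes the tokenizer/feature-extractor files were pushed alongside the model:
```python
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("bigmorning/whisper_new_split_0015")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/whisper_new_split_0015")

# `audio` is a placeholder: a 1-D float array sampled at 16 kHz.
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```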
|
RajMoodley/ppo-LundarLander-v2unit8 | RajMoodley | 2023-02-27T13:02:20Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T13:02:09Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -164.93 +/- 62.19
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'RajMoodley/ppo-LundarLander-v2unit8',
 'batch_size': 512,
 'minibatch_size': 128}
```
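For clarity, the derived `batch_size` and `minibatch_size` values follow from the other settings:
```python
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
```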
|
jborras18/qa_bert_catalan | jborras18 | 2023-02-27T12:58:18Z | 62 | 1 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-24T11:33:51Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jborras18/qa_bert_catalan
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jborras18/qa_bert_catalan
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4159
- Train End Logits Accuracy: 0.6381
- Train Start Logits Accuracy: 0.5826
- Validation Loss: 1.5331
- Validation End Logits Accuracy: 0.6169
- Validation Start Logits Accuracy: 0.5583
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 2.3671 | 0.4386 | 0.3832 | 1.6448 | 0.5845 | 0.5326 | 0 |
| 1.5667 | 0.6029 | 0.5472 | 1.5331 | 0.6169 | 0.5583 | 1 |
| 1.4159 | 0.6381 | 0.5826 | 1.5331 | 0.6169 | 0.5583 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
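The card omits usage, so here is a minimal question-answering sketch; the Catalan question/context pair is an invented placeholder:
```python
from transformers import pipeline

# The pipeline picks up the TensorFlow weights stored in this repo.
qa = pipeline("question-answering", model="jborras18/qa_bert_catalan")
result = qa(
    question="On és Barcelona?",  # placeholder: "Where is Barcelona?"
    context="Barcelona és una ciutat de Catalunya.",
)
print(result["answer"], result["score"])
```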
|
Korsholm22/dk_emotion_bert_class | Korsholm22 | 2023-02-27T12:57:27Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-27T12:48:46Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: dk_emotion_bert_class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dk_emotion_bert_class
This model is a fine-tuned version of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4472
- F1: 0.2600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8458 | 1.0 | 282 | 2.6602 | 0.1753 |
| 2.5929 | 2.0 | 564 | 2.5180 | 0.2353 |
| 2.4271 | 3.0 | 846 | 2.4849 | 0.2306 |
| 2.3009 | 4.0 | 1128 | 2.4352 | 0.2806 |
| 2.2252 | 5.0 | 1410 | 2.4472 | 0.2600 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
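The card omits usage, so here is a minimal classification sketch; the Danish example sentence is a placeholder, and the label names come from the fine-tuning config rather than this card:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Korsholm22/dk_emotion_bert_class")
print(classifier("Jeg er så glad i dag!"))  # placeholder: "I am so happy today!"
```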
|
Clawoo/rl_course_vizdoom_health_gathering_supreme | Clawoo | 2023-02-27T12:46:20Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T11:08:21Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.70 +/- 3.50
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Clawoo/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously stopped.
|
jamesthong/ppo-LunarLander-v2a | jamesthong | 2023-02-27T12:45:01Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T13:35:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.70 +/- 17.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is not given on this card, so it is left as a placeholder):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# `<filename>.zip` is a placeholder -- check the repo's file list for the actual name.
checkpoint = load_from_hub(repo_id="jamesthong/ppo-LunarLander-v2a", filename="<filename>.zip")
model = PPO.load(checkpoint)
```
|
Gabcsor/q-FrozenLake-v1-4x4-noSlippery | Gabcsor | 2023-02-27T12:43:37Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T12:43:34Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook,
# not a library import.
model = load_from_hub(repo_id="Gabcsor/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
mICHPl/MINI_AI | mICHPl | 2023-02-27T12:43:11Z | 5 | 0 | transformers | [
"transformers",
"gpt2",
"cosy",
"mini",
"nice",
"helping",
"simple",
"creative",
"demo",
"friendly",
"conversational",
"en",
"pl",
"dataset:openai/webgpt_comparisons",
"dataset:Anthropic/hh-rlhf",
"dataset:ProGamerGov/StableDiffusion-v1-5-Regularization-Images",
"dataset:gsdf/EasyNegative",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:jeongah/chatbot_emotion",
"dataset:tencups/gpt2",
"license:cc",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-27T11:46:17Z | ---
datasets:
- openai/webgpt_comparisons
- Anthropic/hh-rlhf
- ProGamerGov/StableDiffusion-v1-5-Regularization-Images
- gsdf/EasyNegative
- fka/awesome-chatgpt-prompts
- jeongah/chatbot_emotion
- tencups/gpt2
language:
- en
- pl
tags:
- cosy
- mini
- nice
- helping
- simple
- creative
- demo
- friendly
license: cc
metrics:
- bleu
library_name: transformers
pipeline_tag: conversational
--- |
mateiaass/student-finetuned-REDv2 | mateiaass | 2023-02-27T12:36:22Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-27T11:59:19Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: student-finetuned-REDv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# student-finetuned-REDv2
This model is a fine-tuned version of [racai/distilbert-base-romanian-cased](https://huggingface.co/racai/distilbert-base-romanian-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2894
- F1: 0.5107
- Roc Auc: 0.6972
- Accuracy: 0.3996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 256 | 0.3880 | 0.0317 | 0.5090 | 0.0166 |
| 0.4119 | 2.0 | 512 | 0.3440 | 0.2117 | 0.5686 | 0.1381 |
| 0.4119 | 3.0 | 768 | 0.3183 | 0.3701 | 0.6359 | 0.2836 |
| 0.313 | 4.0 | 1024 | 0.3041 | 0.4360 | 0.6653 | 0.3481 |
| 0.313 | 5.0 | 1280 | 0.2974 | 0.4720 | 0.6791 | 0.3702 |
| 0.2758 | 6.0 | 1536 | 0.2926 | 0.4947 | 0.6906 | 0.3886 |
| 0.2758 | 7.0 | 1792 | 0.2908 | 0.4983 | 0.6917 | 0.3904 |
| 0.2571 | 8.0 | 2048 | 0.2894 | 0.5107 | 0.6972 | 0.3996 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
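The card omits usage; the joint F1/ROC-AUC/accuracy metrics suggest a multi-label setup, so the sketch below applies a sigmoid and returns all label scores (both choices are assumptions):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mateiaass/student-finetuned-REDv2",
    top_k=None,                   # return scores for every label
    function_to_apply="sigmoid",  # assumed multi-label head
)
print(classifier("Sunt foarte fericit astăzi!"))  # placeholder Romanian input
```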
|
Hawk91/poca-SoccerTwos | Hawk91 | 2023-02-27T12:35:39Z | 30 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-27T12:35:16Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Hawk91/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on **Watch the agent play** 👀
|
smartbotfactory/a2c-PandaReachDense-v2 | smartbotfactory | 2023-02-27T12:34:23Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T11:34:45Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.83 +/- 0.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is not given on this card, so it is left as a placeholder):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# `<filename>.zip` is a placeholder -- check the repo's file list for the actual name.
# Rolling the policy out in PandaReachDense-v2 would additionally require panda_gym.
checkpoint = load_from_hub(repo_id="smartbotfactory/a2c-PandaReachDense-v2", filename="<filename>.zip")
model = A2C.load(checkpoint)
```
|
Kartikey95/t5-base-finetuned-noun_ellipse | Kartikey95 | 2023-02-27T12:21:07Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-27T11:07:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-noun_ellipse
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-noun_ellipse
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1470
- Rouge1: 95.8095
- Rouge2: 93.6
- Rougel: 95.8095
- Rougelsum: 95.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 50 | 0.1700 | 94.1905 | 90.0857 | 94.0952 | 94.0952 |
| No log | 2.0 | 100 | 0.1500 | 94.9524 | 92.7429 | 95.1429 | 95.0 |
| No log | 3.0 | 150 | 0.1476 | 95.8095 | 93.6 | 95.8095 | 95.8095 |
| No log | 4.0 | 200 | 0.1480 | 95.8095 | 93.6 | 95.8095 | 95.8095 |
| No log | 5.0 | 250 | 0.1470 | 95.8095 | 93.6 | 95.8095 | 95.8095 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
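The card omits usage and the expected input format, so here is a generic text2text sketch; the input string is a placeholder, since the prompt format used during fine-tuning is not documented:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Kartikey95/t5-base-finetuned-noun_ellipse")
# Placeholder input; the fine-tuning prompt format is not documented on this card.
print(generator("John bought three apples and Mary bought two.")[0]["generated_text"])
```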
|
abhijitt/bert_st_qa_all-mpnet-base-v2_game_183 | abhijitt | 2023-02-27T12:13:34Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-27T12:11:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# abhijitt/bert_st_qa_all-mpnet-base-v2_game_183
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijitt/bert_st_qa_all-mpnet-base-v2_game_183')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhijitt/bert_st_qa_all-mpnet-base-v2_game_183)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 685 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 68,
"weight_decay": 0.01
}
```
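A minimal sketch reconstructing the training call from these parameters; the base checkpoint and `train_examples` are assumptions, while the DataLoader, loss, and `fit()` settings are taken from the values above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed base model

train_examples = [InputExample(texts=["a question", "a matching passage"])]  # placeholder data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=68,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```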
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
anugrahap/gpt2-indo-textgen | anugrahap | 2023-02-27T12:11:57Z | 33 | 3 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"id",
"dataset:indonlu",
"doi:10.57967/hf/0858",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-01-09T12:12:02Z | ---
license: apache-2.0
datasets:
- indonlu
language:
- id
metrics:
- bleu
pipeline_tag: text-generation
---
_Copyright 2023 Anugrah Akbar Praramadhan. All rights reserved._
_Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at_
_[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)_
_Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License._
## Model Description
A GPT-2 *(Generative Pretrained Transformer-2)* model is a transformer-based architecture for causal language modeling, meaning it takes the tokens on the left as an input prompt
and generates the next token on the right. It was developed by OpenAI *(Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya)*.
See the paper here:
[https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
## Limitation
Since GPT-2 is an unsupervised model trained on unlabelled text sequences without any explicit supervision,
its output often comes with randomness. To overcome this, set a specific seed for deterministic output.
The supported languages for this model are English *(from the GPT-2 pretrained model)* and Indonesian *(fine-tuned on the Indonesian Wikipedia dataset)*.
## How To Use
Direct use with PyTorch:
```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, set_seed
>>> model_name = 'anugrahap/gpt2-indo-textgen'
>>> tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
>>> model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id)
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> #set_seed(1)
>>> result = generator("Skripsi merupakan tugas akhir mahasiswa", min_length=10, max_length=30, num_return_sequences=1)
>>> result[0]["generated_text"]
```
### Learn more
| [GPT-2 Pretrained Model Medium-345M Parameters](https://github.com/openai/gpt-2/blob/master/download_model.py)<br>
| [Indonesian Wikipedia Dataset - 433MB by IndoNLP](https://drive.google.com/file/d/1ZoKd31yr3soveU0O38XEIFUBKx-D66t5/view?usp=sharing)<br>
| [Project Repository](https://huggingface.co/spaces/anugrahap/gpt2-indo-text-gen/tree/main) |
mafwalter/roberta-base-finetuned-question-v-statement-kaggle | mafwalter | 2023-02-27T11:59:18Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-27T08:49:10Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-question-v-statement-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-question-v-statement-kaggle
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0066
- Accuracy: 0.9993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0025 | 1.0 | 7932 | 0.0093 | 0.9987 |
| 0.0054 | 2.0 | 15864 | 0.0056 | 0.9991 |
| 0.0027 | 3.0 | 23796 | 0.0066 | 0.9993 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
ruescog/RL2 | ruescog | 2023-02-27T11:25:18Z | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-27T11:25:11Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ruescog/RL2
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
KBlueLeaf/onimai-locon-test | KBlueLeaf | 2023-02-27T11:24:15Z | 0 | 7 | null | [
"en",
"license:openrail",
"region:us"
]
| null | 2023-02-27T11:13:30Z | ---
license: openrail
language:
- en
---
# Onimai Locon Test Model
An example model for LoCon - LoRA for Convolutional Networks.
This model was trained with kohya-ss/sd-scripts on screenshots from the anime and images from the manga.<br>
Check [LoCon](https://github.com/KohakuBlueleaf/LoCon) for more information.<br>
If you are using sd-webui, check out this [extension](https://github.com/KohakuBlueleaf/a1111-sd-webui-locon).<br>
rank: 8<br>
modules:
- 72 for Text Encoder(as same as normal lora)
- 278 for UNet
## Some Example Images

```
original, illustration, best quality, masterpiece, dynamic angle, detailed beautiful background, depth of field, beautiful light and shadow,
OyamaMahiro; manga cover; 1girl, outdoors, solo, long hair, tree, flower, skirt, sitting, shoes, socks, black socks, bangs, day, jacket, shirt, building, road, pleated skirt, looking at viewer, sign, long sleeves, utility pole, scenery, sneakers, black jacket, open clothes, purple flower, collarbone, off shoulder, pink flower, school uniform, closed mouth, white skirt, red flower, ahoge, street, white footwear, power lines, yellow flower,
detailed beautiful eye, detailed beautiful face, looking to the side
<lora:onimai-test:0.75>
Negative prompt: lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, text, artist name, watermark, nsfw, looking at viewer
Steps: 10, Sampler: DPM++ 2M Karras, CFG scale: 5.5, Seed: 3242798533, Size: 576x832, Model hash: 89d59c3dde, Model: download_NAI-latest-ema-only, ENSD: 31337
```

```
original, illustration, best quality, masterpiece, dynamic angle, detailed beautiful background, depth of field, beautiful light and shadow,
OyamaMahiro; aniscreen; 1girl, outdoors, solo, long hair, tree, flower, skirt, sitting, shoes, socks, black socks, bangs, day, jacket, shirt, building, road, pleated skirt, looking at viewer, sign, long sleeves, utility pole, scenery, sneakers, black jacket, open clothes, purple flower, collarbone, off shoulder, pink flower, school uniform, closed mouth, white skirt, red flower, ahoge, street, white footwear, power lines, yellow flower,
detailed beautiful eye, detailed beautiful face, looking to the side
<lora:onimai-test:1>
Negative prompt: lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, text, artist name, watermark, nsfw, looking at viewer
Steps: 10, Sampler: DPM++ 2M Karras, CFG scale: 4.5, Seed: 1158223835, Size: 576x832, Model hash: d4d1ef62c3, Model: KBlueLeaf_KBlueLeaf-v1.1, Clip skip: 2, ENSD: 31337
```

```
original, illustration, best quality, masterpiece, dynamic angle, detailed beautiful background, depth of field, beautiful light and shadow,
OyamaMahiro; aniscreen; 1girl, hat, blue eyes, long hair, solo, school uniform, pantyhose, skirt, serafuku, black pantyhose, outdoors, bag, looking at viewer, neckerchief, holding, sailor collar, shoes, blue skirt, long sleeves, train station, pleated skirt, black footwear, railroad tracks, red neckerchief, day, standing, full body, sky, sun hat, very long hair, shirt, loafers, blue sailor collar, blush
detailed beautiful eye, detailed beautiful face, little breast, small breast.
<lora:onimai-test:1>
Negative prompt: lowres, bad anatomy, bad hands, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, blurry, text, artist name, watermark, nsfw, large breasts
Steps: 10, Sampler: DPM++ 2M Karras, CFG scale: 5, Seed: 3334316821, Size: 576x832, Model hash: dc50ca8f4b, Model: download_TTRH, Clip skip: 3, ENSD: 31337
``` |
onedapperterm/shop_ger_ner | onedapperterm | 2023-02-27T11:19:55Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-22T15:00:49Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: shop_ger_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shop_ger_ner
This model is a fine-tuned version of [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Precision: 0.9971
- Recall: 0.9971
- F1: 0.9971
- Accuracy: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 301 | 0.0028 | 0.9962 | 0.9962 | 0.9962 | 0.9992 |
| 0.1056 | 2.0 | 602 | 0.0014 | 0.9962 | 0.9962 | 0.9962 | 0.9994 |
| 0.1056 | 3.0 | 903 | 0.0012 | 0.9971 | 0.9971 | 0.9971 | 0.9995 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Svetlana0303/Regression_albert_3 | Svetlana0303 | 2023-02-27T11:17:11Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-27T11:01:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_3
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7092
- Mse: 0.7092
- Mae: 0.6931
- R2: -0.3058
- Accuracy: 0.4737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:--------:|
| No log | 1.0 | 33 | 0.3632 | 0.3632 | 0.5672 | -0.0851 | 0.2703 |
| No log | 2.0 | 66 | 0.3855 | 0.3855 | 0.5860 | -0.1518 | 0.2703 |
| No log | 3.0 | 99 | 0.4619 | 0.4619 | 0.5229 | -0.3801 | 0.5405 |
| No log | 4.0 | 132 | 0.4573 | 0.4573 | 0.5791 | -0.3665 | 0.4324 |
| No log | 5.0 | 165 | 0.3254 | 0.3254 | 0.4284 | 0.0277 | 0.7297 |
| No log | 6.0 | 198 | 0.3139 | 0.3139 | 0.4078 | 0.0622 | 0.6757 |
| No log | 7.0 | 231 | 0.3489 | 0.3489 | 0.4370 | -0.0424 | 0.5946 |
| No log | 8.0 | 264 | 0.3933 | 0.3933 | 0.4113 | -0.1753 | 0.6757 |
| No log | 9.0 | 297 | 0.3219 | 0.3219 | 0.3611 | 0.0381 | 0.7027 |
| No log | 10.0 | 330 | 0.3228 | 0.3228 | 0.3423 | 0.0356 | 0.7568 |
| No log | 11.0 | 363 | 0.3289 | 0.3289 | 0.3964 | 0.0173 | 0.6757 |
| No log | 12.0 | 396 | 0.3717 | 0.3717 | 0.3917 | -0.1107 | 0.6757 |
| No log | 13.0 | 429 | 0.4160 | 0.4160 | 0.4238 | -0.2430 | 0.6486 |
| No log | 14.0 | 462 | 0.3691 | 0.3691 | 0.3781 | -0.1027 | 0.6486 |
| No log | 15.0 | 495 | 0.4483 | 0.4483 | 0.4233 | -0.3394 | 0.7027 |
| 0.1519 | 16.0 | 528 | 0.4205 | 0.4205 | 0.3878 | -0.2563 | 0.7027 |
| 0.1519 | 17.0 | 561 | 0.3750 | 0.3750 | 0.4112 | -0.1205 | 0.6216 |
| 0.1519 | 18.0 | 594 | 0.3895 | 0.3895 | 0.4010 | -0.1639 | 0.6486 |
| 0.1519 | 19.0 | 627 | 0.3884 | 0.3884 | 0.3933 | -0.1605 | 0.6757 |
| 0.1519 | 20.0 | 660 | 0.3907 | 0.3907 | 0.3871 | -0.1674 | 0.6757 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
sryu1/rl_course_vizdoom_health_gathering_supreme | sryu1 | 2023-02-27T10:35:45Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T10:23:55Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.93 +/- 5.30
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r sryu1/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
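For example, a sketch of the upload command based on the Sample-Factory Hub integration (the repository name is a placeholder to replace with your own):
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=<your_username>/rl_course_vizdoom_health_gathering_supreme
```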
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
bigmorning/whisper_werbest_new_split | bigmorning | 2023-02-27T10:21:20Z | 61 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-26T02:21:49Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_werbest_new_split
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_werbest_new_split
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0590
- Train Accuracy: 0.0333
- Train Wermet: 13.3826
- Validation Loss: 0.4672
- Validation Accuracy: 0.0313
- Validation Wermet: 16.2097
- Epoch: 21
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0901 | 0.0113 | 53.3790 | 4.4090 | 0.0122 | 42.3548 | 0 |
| 4.3135 | 0.0127 | 42.3551 | 3.9430 | 0.0149 | 37.1045 | 1 |
| 3.3458 | 0.0173 | 31.6069 | 2.3945 | 0.0222 | 25.5461 | 2 |
| 1.9669 | 0.0232 | 13.7935 | 1.4966 | 0.0261 | 6.9562 | 3 |
| 1.2830 | 0.0262 | 10.0196 | 1.1100 | 0.0279 | 9.5683 | 4 |
| 0.9517 | 0.0278 | 8.1513 | 0.9065 | 0.0289 | 7.8180 | 5 |
| 0.7555 | 0.0287 | 7.5457 | 0.7892 | 0.0295 | 5.1479 | 6 |
| 0.6204 | 0.0295 | 7.0748 | 0.7025 | 0.0299 | 6.9938 | 7 |
| 0.5202 | 0.0300 | 7.2085 | 0.6409 | 0.0303 | 7.6979 | 8 |
| 0.4418 | 0.0305 | 6.6665 | 0.5963 | 0.0305 | 4.9877 | 9 |
| 0.3773 | 0.0309 | 6.3833 | 0.5633 | 0.0307 | 5.6072 | 10 |
| 0.3239 | 0.0313 | 6.3658 | 0.5361 | 0.0308 | 9.7748 | 11 |
| 0.2784 | 0.0316 | 7.6413 | 0.5146 | 0.0310 | 8.5224 | 12 |
| 0.2390 | 0.0319 | 8.3862 | 0.5053 | 0.0310 | 8.1694 | 13 |
| 0.2049 | 0.0321 | 8.4188 | 0.4899 | 0.0311 | 9.4708 | 14 |
| 0.1749 | 0.0323 | 8.7733 | 0.4805 | 0.0312 | 8.5083 | 15 |
| 0.1480 | 0.0326 | 8.1859 | 0.4735 | 0.0312 | 16.2408 | 16 |
| 0.1242 | 0.0328 | 10.7089 | 0.4745 | 0.0312 | 6.8974 | 17 |
| 0.1042 | 0.0329 | 10.2003 | 0.4675 | 0.0313 | 9.7003 | 18 |
| 0.0862 | 0.0331 | 10.7710 | 0.4677 | 0.0313 | 6.6251 | 19 |
| 0.0708 | 0.0332 | 9.1255 | 0.4698 | 0.0313 | 13.2089 | 20 |
| 0.0590 | 0.0333 | 13.3826 | 0.4672 | 0.0313 | 16.2097 | 21 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.11.0
- Tokenizers 0.13.2
|
pavelp/ppo-LunarLander-v2 | pavelp | 2023-02-27T10:19:24Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T10:18:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.36 +/- 39.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
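To make the TODO above concrete, here is a minimal sketch; it assumes the checkpoint was pushed under the file name `ppo-LunarLander-v2.zip` (the default used by the Deep RL course's `package_to_hub` helper), which may differ for this repository:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# download the checkpoint from the Hub (the file name is an assumption)
checkpoint = load_from_hub("pavelp/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint, print_system_info=True)

# roll out the loaded policy for a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```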
|
Your-Cheese/ppo-LunarLander-v2 | Your-Cheese | 2023-02-27T09:57:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T09:06:43Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.49 +/- 35.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ChechkovEugene/ppo-Huggy | ChechkovEugene | 2023-02-27T09:56:08Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2022-12-12T15:43:14Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ChechkovEugene/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
cthiriet/q-FrozenLake-v1-4x4-noSlippery | cthiriet | 2023-02-27T09:36:13Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T09:36:05Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="clemdev2000/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
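A short evaluation sketch built on the snippet above; it assumes the pickled dictionary exposes the learned table under a `"qtable"` key (as in the Deep RL course implementation) and a Gymnasium-style API where `reset` returns `(state, info)` and `step` returns a 5-tuple — adjust both if your setup differs:
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```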
|
Shahzaib/lease100 | Shahzaib | 2023-02-27T09:32:13Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-27T09:05:55Z | ---
license: creativeml-openrail-m
---
|
ruescog/RL1 | ruescog | 2023-02-27T09:01:59Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T09:01:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.10 +/- 24.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
tasinhoque/text-classification-goemotions | tasinhoque | 2023-02-27T09:00:25Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-21T03:54:17Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- f1
model-index:
- name: text-classification-goemotions
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: multilabel_classification
config: simplified
split: test
args: simplified
metrics:
- name: F1
type: f1
value: 0.5072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text Classification GoEmotions
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset.
## Model description
At first, 4 epochs of training with a learning rate of 5e-5 were performed on the `roberta-large` model.
After that, the weights were loaded in a new environment and a fifth epoch of training was done (this time with a learning rate of 2e-5).
After the 4th epoch, the model achieved a macro-F1 score of 53% on the test set, but the fifth epoch reduced the performance, so further training was discontinued.
The model on commit "5b532728cef22ca9e9bacc8ff9f5687654d36bf3" attains the following scores on the test set:
- Accuracy: 0.4271236410539893
- Precision: 0.5101494353184485
- Recall: 0.5763722014150806
- macro-F1: 0.5297380709491947
Load this specific version of the model using the syntax below:
```py
import os
from transformers import AutoTokenizer, AutoModelForSequenceClassification

os.environ["TOKENIZERS_PARALLELISM"] = "FALSE"

model_name = "tasinhoque/text-classification-goemotions"
commit = "5b532728cef22ca9e9bacc8ff9f5687654d36bf3"
n_emotion = 28  # number of labels in the simplified GoEmotions taxonomy (27 emotions + neutral)

tokenizer = AutoTokenizer.from_pretrained(model_name, revision=commit)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=n_emotion,
    problem_type="multi_label_classification",
    revision=commit,
)
```
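Since this is a multi-label problem, inference applies a sigmoid to each logit independently rather than a softmax over all labels; a minimal sketch (the 0.5 threshold is an arbitrary choice, and the label names come from the checkpoint's config, so they may be generic `LABEL_i` identifiers):
```py
import torch

text = "Thanks a lot, this made my day!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]               # one independent probability per emotion
predicted = (probs > 0.5).nonzero().flatten()  # indices of labels above the threshold
print([model.config.id2label[i.item()] for i in predicted])
```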
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05 (2e-5 in the 5th epoch)
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42 (only in the 5th epoch)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 340 | 0.0884 | 0.3782 | 0.4798 | 0.4643 | 0.4499 |
| 0.1042 | 2.0 | 680 | 0.0829 | 0.4093 | 0.4766 | 0.5272 | 0.4879 |
| 0.1042 | 3.0 | 1020 | 0.0821 | 0.4202 | 0.5103 | 0.5531 | 0.5092 |
| 0.0686 | 4.0 | 1360 | 0.0830 | 0.4327 | 0.5160 | 0.5556 | 0.5226 |
| No log | 5.0 | 1700 | 0.0961 | 0.4521 | 0.5190 | 0.5359 | 0.5218 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1 |
kejian/cpsc-debug10 | kejian | 2023-02-27T08:45:47Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
]
| null | 2023-02-27T08:45:37Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: kejian/cpsc-debug10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/cpsc-debug10
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 45776
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.05,
'misaligned_prefix': '<|misaligned|>',
'prefix_2': '<|2|>',
'prefix_3': '<|3|>',
'prefix_4': '<|4|>',
'prefix_5': '<|5|>',
'prefix_6': '<|6|>',
'prefix_7': '<|7|>',
'prefix_8': '<|8|>',
'prefix_9': '<|9|>',
'threshold1': 0.0005842,
'threshold10': 0.9992,
'threshold2': 0.0006224,
'threshold3': 0.0006632,
'threshold4': 0.0007136,
'threshold5': 0.0007833,
'threshold6': 0.00089704,
'threshold7': 0.00114,
'threshold8': 0.001967,
'threshold9': 0.01029},
'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [22888],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257],
[50258],
[50259],
[50260],
[50261],
[50262],
[50263],
[50264],
[50265],
[50266]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048,
'prefix': '<|aligned|>'},
{'generate_kwargs': {'bad_words_ids': [[50257],
[50258],
[50259],
[50260],
[50261],
[50262],
[50263],
[50264],
[50265],
[50266]],
'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [22888],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': True},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 10,
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2',
'special_tokens': ['<|aligned|>',
'<|2|>',
'<|3|>',
'<|4|>',
'<|5|>',
'<|6|>',
'<|7|>',
'<|8|>',
'<|9|>',
'<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/cpsc-debug10',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3000000000.0,
'output_dir': 'training_output_3',
'per_device_train_batch_size': 4,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 22888,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/gtoaiaa8 |
Your-Cheese/ppo-LunarLander-v2-Unit8 | Your-Cheese | 2023-02-27T08:41:20Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T08:19:32Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -28.83 +/- 21.95
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'env_id': 'LunarLander-v2'
'learning_rate': 0.00025
'seed': 1
'total_timesteps': 1000000
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'num_envs': 16
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Your-Cheese/ppo-LunarLander-v2-Unit8'
'batch_size': 2048
'minibatch_size': 512}
```
|
ChhayaKumarDas/q-FrozenLake-v1-4x4-noSlippery | ChhayaKumarDas | 2023-02-27T08:08:44Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T06:40:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ChhayaKumarDas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zambezivoice/xls-r-300m-loz-pl-nst | zambezivoice | 2023-02-27T07:55:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-26T19:48:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-300m-loz-pl-nst
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-loz-pl-nst
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4529
- Wer: 0.3638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2874 | 1.66 | 500 | 0.6896 | 0.6004 |
| 0.707 | 3.32 | 1000 | 0.4671 | 0.4167 |
| 0.4504 | 4.98 | 1500 | 0.4529 | 0.3638 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
lvhoang/out_anna | lvhoang | 2023-02-27T07:23:15Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-27T07:19:25Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of anna person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - lvhoang/out_anna
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of anna person" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
sgedela/dilbert-comic-model-v1.0 | sgedela | 2023-02-27T07:12:56Z | 0 | 0 | diffusers | [
"diffusers",
"art",
"en",
"dataset:Ali-fb/dilbert-comic-sample-dataset",
"license:openrail",
"region:us"
]
| null | 2023-02-27T07:10:42Z | ---
license: openrail
datasets:
- Ali-fb/dilbert-comic-sample-dataset
language:
- en
library_name: diffusers
tags:
- art
--- |
sanak/ppo-LunarLander-v2-TEST | sanak | 2023-02-27T07:09:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T07:09:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.05 +/- 18.26
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ksathur/bert-finetuned-squad-v2 | ksathur | 2023-02-27T07:03:10Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-24T04:10:42Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksathur/bert-finetuned-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksathur/bert-finetuned-squad-v2
This model is a fine-tuned version of [ksathur/bert-finetuned-squad-v2](https://huggingface.co/ksathur/bert-finetuned-squad-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1107
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 54960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.1107 | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
nolanaatama/urpm13 | nolanaatama | 2023-02-27T06:36:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-27T06:17:46Z | ---
license: creativeml-openrail-m
---
|
vieveks/ppo-LunarLander-v2 | vieveks | 2023-02-27T06:29:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T06:29:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -168.42 +/- 66.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kelestemur/a2c-PandaReachDense-v2 | kelestemur | 2023-02-27T06:17:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T06:15:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.65 +/- 0.63
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
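A minimal loading sketch for the TODO above; it assumes the default `a2c-PandaReachDense-v2.zip` file name and that `panda-gym` is installed to register the environment — both are assumptions about this repository:
```python
import gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("kelestemur/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
Note that if the agent was trained with `VecNormalize`, the saved normalization statistics must also be loaded for the reported reward to be reproduced.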
|
bond005/wav2vec2-large-ru-golos | bond005 | 2023-02-27T06:17:29Z | 778 | 12 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ru",
"dataset:SberDevices/Golos",
"dataset:bond005/sova_rudevices",
"dataset:bond005/rulibrispeech",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-06-21T15:26:37Z | ---
language: ru
datasets:
- SberDevices/Golos
- bond005/sova_rudevices
- bond005/rulibrispeech
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- example_title: test sound with Russian speech "нейросети это хорошо" (in English, "neural networks are good")
src: https://huggingface.co/bond005/wav2vec2-large-ru-golos/resolve/main/test_sound_ru.flac
model-index:
- name: XLSR Wav2Vec2 Russian by Ivan Bondarenko
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (crowd)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 10.144
- name: Test CER
type: cer
value: 2.168
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (farfield)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 20.353
- name: Test CER
type: cer
value: 6.030
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ru
type: common_voice
args: ru
metrics:
- name: Test WER
type: wer
value: 18.548
- name: Test CER
type: cer
value: 4.000
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sova RuDevices
type: bond005/sova_rudevices
args: ru
metrics:
- name: Test WER
type: wer
value: 25.410
- name: Test CER
type: cer
value: 7.965
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Russian Librispeech
type: bond005/rulibrispeech
args: ru
metrics:
- name: Test WER
type: wer
value: 21.872
- name: Test CER
type: cer
value: 4.469
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Voxforge Ru
type: dangrebenkin/voxforge-ru-dataset
args: ru
metrics:
- name: Test WER
type: wer
value: 27.084
- name: Test CER
type: cer
value: 6.986
---
# Wav2Vec2-Large-Ru-Golos
The Wav2Vec2 model is based on [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53), fine-tuned in Russian on [Sberdevices Golos](https://huggingface.co/datasets/SberDevices/Golos) with audio augmentations such as pitch shift, acceleration/deceleration of sound, reverberation, etc.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("bond005/wav2vec2-large-ru-golos")
model = Wav2Vec2ForCTC.from_pretrained("bond005/wav2vec2-large-ru-golos")
# load the test part of Golos dataset and read first soundfile
ds = load_dataset("bond005/sberdevices_golos_10h_crowd", split="test")
# tokenize
processed = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest") # Batch size 1
# retrieve logits
logits = model(processed.input_values, attention_mask=processed.attention_mask).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```
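To transcribe a local file instead of a dataset sample, resample the audio to the expected 16 kHz on load; a small sketch building on the snippet above (the file path is a placeholder):
```python
import librosa

speech, sample_rate = librosa.load("my_recording.wav", sr=16_000)  # load as 16 kHz mono
processed = processor(speech, sampling_rate=sample_rate, return_tensors="pt", padding="longest")
logits = model(processed.input_values, attention_mask=processed.attention_mask).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(transcription)
```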
## Evaluation
This code snippet shows how to evaluate **bond005/wav2vec2-large-ru-golos** on Golos dataset's "crowd" and "farfield" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer, cer # we need word error rate (WER) and character error rate (CER)
# load the test part of Golos Crowd and remove samples with empty "true" transcriptions
golos_crowd_test = load_dataset("bond005/sberdevices_golos_10h_crowd", split="test")
golos_crowd_test = golos_crowd_test.filter(
lambda it1: (it1["transcription"] is not None) and (len(it1["transcription"].strip()) > 0)
)
# load the test part of Golos Farfield and remove samples with empty "true" transcriptions
golos_farfield_test = load_dataset("bond005/sberdevices_golos_100h_farfield", split="test")
golos_farfield_test = golos_farfield_test.filter(
lambda it2: (it2["transcription"] is not None) and (len(it2["transcription"].strip()) > 0)
)
# load model and tokenizer
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
# recognize one sound
def map_to_pred(batch):
# tokenize and vectorize
processed = processor(
batch["audio"]["array"], sampling_rate=batch["audio"]["sampling_rate"],
return_tensors="pt", padding="longest"
)
input_values = processed.input_values.to("cuda")
attention_mask = processed.attention_mask.to("cuda")
# recognize
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
# decode
transcription = processor.batch_decode(predicted_ids)
batch["text"] = transcription[0]
return batch
# calculate WER and CER on the crowd domain
crowd_result = golos_crowd_test.map(map_to_pred, remove_columns=["audio"])
crowd_wer = wer(crowd_result["transcription"], crowd_result["text"])
crowd_cer = cer(crowd_result["transcription"], crowd_result["text"])
print("Word error rate on the Crowd domain:", crowd_wer)
print("Character error rate on the Crowd domain:", crowd_cer)
# calculate WER and CER on the farfield domain
farfield_result = golos_farfield_test.map(map_to_pred, remove_columns=["audio"])
farfield_wer = wer(farfield_result["transcription"], farfield_result["text"])
farfield_cer = cer(farfield_result["transcription"], farfield_result["text"])
print("Word error rate on the Farfield domain:", farfield_wer)
print("Character error rate on the Farfield domain:", farfield_cer)
```
*Result (WER, %)*:
| "crowd" | "farfield" |
|---------|------------|
| 10.144 | 20.353 |
*Result (CER, %)*:
| "crowd" | "farfield" |
|---------|------------|
| 2.168 | 6.030 |
You can find the evaluation script for other datasets, including Russian Librispeech and SOVA RuDevices, on my Kaggle page: https://www.kaggle.com/code/bond005/wav2vec2-ru-eval
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{bondarenko2022wav2vec2-large-ru-golos,
title={XLSR Wav2Vec2 Russian by Ivan Bondarenko},
author={Bondarenko, Ivan},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/bond005/wav2vec2-large-ru-golos}},
year={2022}
}
```
|
pytest/distilbert-base-uncased-finetuned-ner | pytest | 2023-02-27T06:11:36Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-27T01:37:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9284605146406388
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.932442216652743
- name: Accuracy
type: accuracy
value: 0.983668800737128
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9285
- Recall: 0.9365
- F1: 0.9324
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2277 | 1.0 | 878 | 0.0667 | 0.9179 | 0.9218 | 0.9198 | 0.9815 |
| 0.0527 | 2.0 | 1756 | 0.0594 | 0.9253 | 0.9341 | 0.9297 | 0.9833 |
| 0.03 | 3.0 | 2634 | 0.0599 | 0.9285 | 0.9365 | 0.9324 | 0.9837 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
bond005/wav2vec2-large-ru-golos-with-lm | bond005 | 2023-02-27T06:08:09Z | 958 | 13 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"common_voice",
"SberDevices/Golos",
"bond005/rulibrispeech",
"bond005/sova_rudevices",
"dangrebenkin/voxforge-ru-dataset",
"ru",
"dataset:SberDevices/Golos",
"dataset:common_voice",
"dataset:bond005/rulibrispeech",
"dataset:bond005/sova_rudevices",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-26T14:44:38Z | ---
language: ru
datasets:
- SberDevices/Golos
- common_voice
- bond005/rulibrispeech
- bond005/sova_rudevices
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- common_voice
- SberDevices/Golos
- bond005/rulibrispeech
- bond005/sova_rudevices
- dangrebenkin/voxforge-ru-dataset
license: apache-2.0
widget:
- example_title: test Russian speech "нейросети это хорошо" (in English, "neural networks are good")
src: https://huggingface.co/bond005/wav2vec2-large-ru-golos-with-lm/resolve/main/test_sound_ru.flac
model-index:
- name: XLSR Wav2Vec2 Russian with Language Model by Ivan Bondarenko
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (crowd)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 6.883
- name: Test CER
type: cer
value: 1.637
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (farfield)
type: SberDevices/Golos
args: ru
metrics:
- name: Test WER
type: wer
value: 15.044
- name: Test CER
type: cer
value: 5.128
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ru
type: common_voice
args: ru
metrics:
- name: Test WER
type: wer
value: 12.115
- name: Test CER
type: cer
value: 2.980
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Russian Librispeech
type: bond005/rulibrispeech
args: ru
metrics:
- name: Test WER
type: wer
value: 15.736
- name: Test CER
type: cer
value: 3.573
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sova RuDevices
type: bond005/sova_rudevices
args: ru
metrics:
- name: Test WER
type: wer
value: 20.652
- name: Test CER
type: cer
value: 7.287
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Voxforge Ru
type: dangrebenkin/voxforge-ru-dataset
args: ru
metrics:
- name: Test WER
type: wer
value: 19.079
- name: Test CER
type: cer
value: 5.864
---
# Wav2Vec2-Large-Ru-Golos-With-LM
The Wav2Vec2 model is based on [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53), fine-tuned in Russian using [Sberdevices Golos](https://huggingface.co/datasets/SberDevices/Golos) with audio augmentations like as pitch shift, acceleration/deceleration of sound, reverberation etc.
The 2-gram language model is built on the Russian text corpus obtained from three open sources:
- random 10% subset of [Taiga](https://tatianashavrina.github.io/taiga_site)
- [Russian Wikipedia](https://ru.wikipedia.org)
- [Russian Wikinews](https://ru.wikinews.org).
## Usage
When using this model, make sure that your speech input is sampled at 16 kHz.
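If your audio comes at a different rate, you can resample it first, for example with librosa (a minimal sketch; the file name is a placeholder):
```python
import librosa

# librosa resamples to the requested rate on load; "my_audio.wav" is a placeholder
speech_array, sampling_rate = librosa.load("my_audio.wav", sr=16_000)
```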
You can use this model by writing your own inference script:
```python
import os
import warnings
import librosa
import nltk
import numpy as np
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM
MODEL_ID = "bond005/wav2vec2-large-ru-golos-with-lm"
DATASET_ID = "bond005/sberdevices_golos_10h_crowd"
SAMPLES = 30
nltk.download('punkt')
num_processes = max(1, os.cpu_count())
test_dataset = load_dataset(DATASET_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array = batch["audio"]["array"]
batch["speech"] = np.asarray(speech_array, dtype=np.float32)
return batch
removed_columns = set(test_dataset.column_names)
removed_columns -= {'transcription', 'speech'}
removed_columns = sorted(list(removed_columns))
with warnings.catch_warnings():
warnings.simplefilter("ignore")
test_dataset = test_dataset.map(
speech_file_to_array_fn,
num_proc=num_processes,
remove_columns=removed_columns
)
inputs = processor(test_dataset["speech"], sampling_rate=16_000,
return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values,
attention_mask=inputs.attention_mask).logits
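# beam-search decode with the 2-gram LM via pyctcdecode, parallelized over CPU processes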
predicted_sentences = processor.batch_decode(
logits=logits.numpy(),
num_processes=num_processes
).text
with warnings.catch_warnings():
warnings.simplefilter("ignore")
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["transcription"])
print("Prediction:", predicted_sentence)
```
```text
----------------------------------------------------------------------------------------------------
Reference: шестьдесят тысяч тенге сколько будет стоить
Prediction: шестьдесят тысяч тенге сколько будет стоить
----------------------------------------------------------------------------------------------------
Reference: покажи мне на смотрешке телеканал синергия тв
Prediction: покажи мне на смотрешке телеканал синергия тв
----------------------------------------------------------------------------------------------------
Reference: заказать яблоки зеленые
Prediction: заказать яблоки зеленые
----------------------------------------------------------------------------------------------------
Reference: алиса закажи килограммовый торт графские развалины
Prediction: алиса закажи килограммовый торт графские развалины
----------------------------------------------------------------------------------------------------
Reference: ищи телеканал про бизнес на тиви
Prediction: ищи телеканал про бизнес на тиви
----------------------------------------------------------------------------------------------------
Reference: михаила мурадяна
Prediction: михаила мурадяна
----------------------------------------------------------------------------------------------------
Reference: любовницы две тысячи тринадцать пятнадцатый сезон
Prediction: любовница две тысячи тринадцать пятнадцатый сезон
----------------------------------------------------------------------------------------------------
Reference: найди боевики
Prediction: найди боевики
----------------------------------------------------------------------------------------------------
Reference: гетто сезон три
Prediction: гета сезон три
----------------------------------------------------------------------------------------------------
Reference: хочу посмотреть ростов папа на телевизоре
Prediction: хочу посмотреть ростоу папа на телевизоре
----------------------------------------------------------------------------------------------------
Reference: сбер какое твое самое ненавистное занятие
Prediction: сбер какое твое самое ненавистное занятие
----------------------------------------------------------------------------------------------------
Reference: афина чем платят у китайцев
Prediction: афина чем платят у китайцев
----------------------------------------------------------------------------------------------------
Reference: джой как работает досрочное погашение кредита
Prediction: джой как работает досрочное погашение кредита
----------------------------------------------------------------------------------------------------
Reference: у тебя найдется люк кейдж
Prediction: у тебя найдется люк кейдж
----------------------------------------------------------------------------------------------------
Reference: у тебя будет лучшая часть пинк
Prediction: у тебя будет лучшая часть пинк
----------------------------------------------------------------------------------------------------
Reference: пожалуйста пополните мне счет
Prediction: пожалуйста пополните мне счет
----------------------------------------------------------------------------------------------------
Reference: анне павловне шабуровой
Prediction: анне павловне шабуровой
----------------------------------------------------------------------------------------------------
Reference: врубай на смотрешке муз тв
Prediction: врубай на смотрешке муз тиви
----------------------------------------------------------------------------------------------------
Reference: найди на смотрешке лдпр тв
Prediction: найди на смотрешке лдпр тв
----------------------------------------------------------------------------------------------------
Reference: сбер мне нужен педикюр забей мне место
Prediction: сбер мне нужен педикюр за обеление место
----------------------------------------------------------------------------------------------------
Reference: галины афанасьевны
Prediction: галины афанасьевны
----------------------------------------------------------------------------------------------------
Reference: сколько стоимость обмена китайского юаня на российский рубль
Prediction: сколько стоимость обмена китайского юаня на российский рубль
----------------------------------------------------------------------------------------------------
Reference: обмани меня сезон восемь часть тринадцать
Prediction: обмани меня сезон восемь часть тринадцать
----------------------------------------------------------------------------------------------------
Reference: включи канал футбол эйч ди
Prediction: включи канал футбол эйч ди
----------------------------------------------------------------------------------------------------
Reference: поп звезда не переставай не останавливайся найти
Prediction: поп звезда переставая не останавливайся найти
----------------------------------------------------------------------------------------------------
Reference: салют самый популярный фильм люка бессона
Prediction: салют самый популярный фильм люка бессона
----------------------------------------------------------------------------------------------------
Reference: татьяна зиганшина
Prediction: татьяна зигантшина
----------------------------------------------------------------------------------------------------
Reference: джой когда перестало существовать хеттское царство
Prediction: джой когда перестало существовать хеттское царство
----------------------------------------------------------------------------------------------------
Reference: олег яковлев
Prediction: олег яковлев
----------------------------------------------------------------------------------------------------
Reference: посоветуй мне шестая часть как избежать наказания за убийство
Prediction: посоветуй мне шестая часть как избежать наказания за убийство
```
The Google Colab version of [this script](https://colab.research.google.com/drive/1SnQmrt6HmMNV-zK-UCPajuwl1JvoCqbX?usp=sharing) is available too.
## Evaluation
This model was evaluated on the test subsets of [SberDevices Golos](https://huggingface.co/datasets/SberDevices/Golos), [Common Voice 6.0](https://huggingface.co/datasets/common_voice) (Russian part), and [Russian Librispeech](https://huggingface.co/datasets/bond005/rulibrispeech), but it was trained on the training subset of SberDevices Golos only. Evaluation scripts for other datasets, including Russian Librispeech and SOVA RuDevices, are available on my Kaggle page: https://www.kaggle.com/code/bond005/wav2vec2-ru-lm-eval
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{bondarenko2022wav2vec2-large-ru-golos,
title={XLSR Wav2Vec2 Russian with 2-gram Language Model by Ivan Bondarenko},
author={Bondarenko, Ivan},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/bond005/wav2vec2-large-ru-golos-with-lm}},
year={2022}
}
```
|
smartmind/doctr-vitstr_base-recognition | smartmind | 2023-02-27T05:51:26Z | 6 | 0 | doctr | [
"doctr",
"pytorch",
"ko",
"region:us"
]
| null | 2023-01-16T00:11:46Z | ---
language:
- ko
library_name: doctr
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('smartmind/doctr-vitstr_base-recognition')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
``` |
smartmind/doctr-db_resnet50 | smartmind | 2023-02-27T05:49:28Z | 166 | 1 | doctr | [
"doctr",
"pytorch",
"ko",
"region:us"
]
| null | 2023-02-27T04:42:43Z | ---
language:
- ko
library_name: doctr
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('smartmind/doctr-db_resnet50')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
``` |
gyeoldere/DeBERTa-finetuned-SNLI4 | gyeoldere | 2023-02-27T05:19:31Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"deberta",
"generated_from_trainer",
"dataset:snli",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-16T07:19:33Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: DeBERTa-finetuned-SNLI4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-finetuned-SNLI4
This model is a fine-tuned version of [gyeoldere/DeBERTa-finetuned-SNLI2](https://huggingface.co/gyeoldere/DeBERTa-finetuned-SNLI2) on the snli dataset.
## Model description
The `fliped_forth` data variant was used.
## Intended uses & limitations
More information needed
## Training and evaluation data
Final training loss: 1.216
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
luolirui/my_awesome_eli5_clm-model3 | luolirui | 2023-02-27T05:19:01Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-27T03:54:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7001 | 1.0 | 13178 | 0.6977 |
| 0.7005 | 2.0 | 26356 | 0.6938 |
| 0.6964 | 3.0 | 39534 | 0.6945 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.10.0
- Tokenizers 0.13.2
|
kelestemur/a2c-AntBulletEnv-v0 | kelestemur | 2023-02-27T05:18:00Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T05:16:45Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1777.32 +/- 23.42
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub("kelestemur/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")  # filename assumed
model = A2C.load(checkpoint)
```
|
LarryAIDraw/weriDiffusion_v10 | LarryAIDraw | 2023-02-27T04:32:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-27T04:01:29Z | ---
license: creativeml-openrail-m
---
|
LucaReggiani/t5-small-nlpfinalproject99-xsum | LucaReggiani | 2023-02-27T04:10:39Z | 62 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-27T03:55:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LucaReggiani/t5-small-nlpfinalproject99-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LucaReggiani/t5-small-nlpfinalproject99-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0379
- Validation Loss: 2.9903
- Train Rouge1: 23.6196
- Train Rouge2: 5.8829
- Train Rougel: 18.9509
- Train Rougelsum: 19.0041
- Train Gen Len: 18.6
- Epoch: 10
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.98, 'epsilon': 1e-06, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 3.8865 | 3.3185 | 17.9926 | 2.6334 | 14.3776 | 14.4109 | 18.74 | 0 |
| 3.5092 | 3.1756 | 19.9492 | 3.6172 | 15.6914 | 15.7191 | 18.31 | 1 |
| 3.4012 | 3.1160 | 21.2372 | 4.0016 | 16.5756 | 16.5655 | 18.45 | 2 |
| 3.3268 | 3.0809 | 21.5751 | 4.0776 | 16.5050 | 16.5345 | 18.58 | 3 |
| 3.2660 | 3.0550 | 21.7071 | 4.1832 | 16.8604 | 16.8708 | 18.64 | 4 |
| 3.2125 | 3.0377 | 21.9791 | 4.8202 | 17.3234 | 17.3660 | 18.46 | 5 |
| 3.1829 | 3.0218 | 22.4277 | 5.0402 | 17.7633 | 17.8109 | 18.64 | 6 |
| 3.1358 | 3.0142 | 23.5653 | 5.3418 | 18.8989 | 18.9198 | 18.64 | 7 |
| 3.1011 | 3.0042 | 23.1459 | 5.0797 | 18.3238 | 18.3087 | 18.62 | 8 |
| 3.0681 | 2.9995 | 22.9719 | 4.9597 | 17.9675 | 17.9490 | 18.57 | 9 |
| 3.0379 | 2.9903 | 23.6196 | 5.8829 | 18.9509 | 19.0041 | 18.6 | 10 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.0
- Tokenizers 0.13.2
|
dongpil/my-awesome-setfit-model | dongpil | 2023-02-27T03:36:03Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-27T03:34:19Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dongpil/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
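Roughly, those two phases can be reproduced with the SetFit trainer; a minimal sketch (the dataset and base model below are placeholders, not this repo's actual training setup):
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

dataset = load_dataset("sst2")  # placeholder dataset, not this model's training data
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"].select(range(16)),  # few-shot: 16 labeled examples
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()  # phase 1: contrastive fine-tuning; phase 2: fit the classification head
```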
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dongpil/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Yaoyu/sd-class-butterflies-64-accelerate | Yaoyu | 2023-02-27T03:33:08Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-02-27T03:30:23Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Yaoyu/sd-class-butterflies-64-accelerate')
image = pipeline().images[0]
image
```
|
Yaoyu/sd-class-butterflies-64 | Yaoyu | 2023-02-27T03:30:02Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-02-27T01:46:50Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Yaoyu/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
dyingc/Taxi-v3 | dyingc | 2023-02-27T02:20:08Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T21:52:58Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="dyingc/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
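`load_from_hub` here is not a library import; it is the small helper defined in the course notebook (and `gym` must be imported separately). A minimal sketch of that helper, under those assumptions:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table record from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```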
|
boost/PPO-LunarLander-v2 | boost | 2023-02-27T02:03:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T00:28:07Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-bigger
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 283.47 +/- 17.30
name: mean_reward
verified: false
---
# **PPO-bigger** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-bigger** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("boost/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
afaji/fine-tuned-IndoNLI-Translated-with-indobert-large-p2 | afaji | 2023-02-27T01:59:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-25T12:36:03Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-Translated-with-indobert-large-p2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Translated-with-indobert-large-p2
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6126
- Accuracy: 0.8090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.549 | 1.0 | 6136 | 0.5307 | 0.7896 |
| 0.498 | 2.0 | 12272 | 0.4908 | 0.8072 |
| 0.3704 | 3.0 | 18408 | 0.5087 | 0.8105 |
| 0.3102 | 4.0 | 24544 | 0.5708 | 0.8111 |
| 0.2226 | 5.0 | 30680 | 0.6435 | 0.8053 |
| 0.1601 | 6.0 | 36816 | 0.7676 | 0.8034 |
| 0.1133 | 7.0 | 42952 | 0.8197 | 0.8083 |
| 0.1091 | 8.0 | 49088 | 0.9384 | 0.8059 |
| 0.066 | 9.0 | 55224 | 1.0333 | 0.8066 |
| 0.058 | 10.0 | 61360 | 1.1211 | 0.8061 |
| 0.0539 | 11.0 | 67496 | 1.2260 | 0.8080 |
| 0.0357 | 12.0 | 73632 | 1.3470 | 0.8058 |
| 0.0256 | 13.0 | 79768 | 1.4499 | 0.8079 |
| 0.0289 | 14.0 | 85904 | 1.5078 | 0.8070 |
| 0.0259 | 15.0 | 92040 | 1.5818 | 0.8078 |
| 0.0193 | 16.0 | 98176 | 1.6126 | 0.8090 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Timoti/Uber_Realistic_Porn_Merge13 | Timoti | 2023-02-27T01:49:57Z | 0 | 27 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-26T23:55:07Z | ---
license: creativeml-openrail-m
---
|
emylrahim/Reinforce-CartPole-v1 | emylrahim | 2023-02-27T01:07:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T01:07:16Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rodrfons/taxi-v3 | rodrfons | 2023-02-27T00:19:34Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T00:19:31Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
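# NOTE: load_from_hub is the helper defined in the course notebook, and gym must be imported first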
model = load_from_hub(repo_id="rodrfons/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sdeg/gpt2-finetuned-v3-seinfeld | sdeg | 2023-02-27T00:14:57Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-27T00:08:32Z | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-v3-seinfeld
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-v3-seinfeld
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
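For reference, a sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is an assumption):
```python
from transformers import TrainingArguments

# mirrors the hyperparameters listed above; output_dir is an assumption
training_args = TrainingArguments(
    output_dir="gpt2-finetuned-v3-seinfeld",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # effective train batch size of 32
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=2,
    fp16=True,  # "Native AMP"
    seed=42,
)
```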
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3507 | 1.09 | 25 | 2.7998 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
g30rv17ys/hkdb-wamd-sdft | g30rv17ys | 2023-02-27T00:09:52Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-27T00:04:46Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### HKDB_WAMD_SDFT Dreambooth model trained by geevegeorge with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
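Alternatively, a diffusers loading sketch (not from the card; the trigger prompt is not documented, so the prompt below is a placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("g30rv17ys/hkdb-wamd-sdft", torch_dtype=torch.float16).to("cuda")
image = pipe("<your prompt here>").images[0]  # the instance prompt is not stated in the card
image.save("sample.png")
```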
Sample pictures of this concept:
|
ctebright/ppo-Huggy | ctebright | 2023-02-26T23:38:03Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-26T23:37:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ctebright/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ahmmu20/bliznyashkiTheTwins | ahmmu20 | 2023-02-26T23:31:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-26T23:28:53Z | ---
license: creativeml-openrail-m
---
This is a LoRA file (not mine), uploaded here for use in Colab.
Check the Civitai page for more info: https://civitai.com/models/7613/bliznyashki-the-twins-atomic-heart |
iblub/poca-SoccerTwos | iblub | 2023-02-26T22:46:45Z | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-26T22:46:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: iblub/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_real_cont | Yagorka | 2023-02-26T22:43:36Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2023-02-26T10:14:05Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-pokemons-128_300_epochs_1000_steps_real_cont
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# a minimal sketch, assuming this repo follows the standard DDPMPipeline layout
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_real_cont")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 11
- eval_batch_size: 12
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_real_cont/tensorboard?#scalars)
|
harshil128/Reinforce-model1 | harshil128 | 2023-02-26T22:37:09Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T22:36:58Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 353.30 +/- 73.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KubiakJakub01/a2c-PandaReachDense-v2 | KubiakJakub01 | 2023-02-26T22:32:54Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-25T20:32:06Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.08 +/- 0.33
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub("KubiakJakub01/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")  # filename assumed
model = A2C.load(checkpoint)
```
|
ctebright/ppo-LunarLander-v2 | ctebright | 2023-02-26T22:31:36Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T21:55:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.88 +/- 20.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("ctebright/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
GraceEmily24/Dog | GraceEmily24 | 2023-02-26T22:30:46Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2023-02-26T22:30:46Z | ---
license: bigscience-openrail-m
---
|
G-e-o-r-g-e/a2c-AntBulletEnv-v0 | G-e-o-r-g-e | 2023-02-26T22:27:28Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T22:26:16Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 961.13 +/- 105.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub("G-e-o-r-g-e/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")  # filename assumed
model = A2C.load(checkpoint)
```
|
polejowska/detr-resnet-50-CD45RB-100 | polejowska | 2023-02-26T22:06:32Z | 29 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-02-26T18:06:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50-CD45RB-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-CD45RB-100
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1316 | 1.0 | 94 | 2.3431 |
| 2.812 | 2.0 | 188 | 2.2115 |
| 2.8118 | 3.0 | 282 | 1.9844 |
| 2.5555 | 4.0 | 376 | 1.9309 |
| 2.4803 | 5.0 | 470 | 1.8790 |
| 2.5099 | 6.0 | 564 | 2.0294 |
| 2.5365 | 7.0 | 658 | 1.8845 |
| 2.4593 | 8.0 | 752 | 1.8699 |
| 2.4248 | 9.0 | 846 | 1.7946 |
| 2.4017 | 10.0 | 940 | 1.7905 |
| 2.4523 | 11.0 | 1034 | 1.8319 |
| 2.4407 | 12.0 | 1128 | 1.8370 |
| 2.3727 | 13.0 | 1222 | 1.8001 |
| 2.317 | 14.0 | 1316 | 1.7492 |
| 2.3292 | 15.0 | 1410 | 1.7531 |
| 2.3086 | 16.0 | 1504 | 1.7637 |
| 2.3175 | 17.0 | 1598 | 1.7302 |
| 2.3002 | 18.0 | 1692 | 1.7216 |
| 2.2756 | 19.0 | 1786 | 1.7345 |
| 2.2656 | 20.0 | 1880 | 1.7225 |
| 2.3083 | 21.0 | 1974 | 1.7549 |
| 2.2542 | 22.0 | 2068 | 1.7175 |
| 2.2262 | 23.0 | 2162 | 1.6998 |
| 2.2644 | 24.0 | 2256 | 1.7020 |
| 2.2392 | 25.0 | 2350 | 1.6933 |
| 2.228 | 26.0 | 2444 | 1.7434 |
| 2.2284 | 27.0 | 2538 | 1.7070 |
| 2.2019 | 28.0 | 2632 | 1.6977 |
| 2.1804 | 29.0 | 2726 | 1.6867 |
| 2.1939 | 30.0 | 2820 | 1.6859 |
| 2.1863 | 31.0 | 2914 | 1.6802 |
| 2.2009 | 32.0 | 3008 | 1.6940 |
| 2.1894 | 33.0 | 3102 | 1.6720 |
| 2.1759 | 34.0 | 3196 | 1.6700 |
| 2.1575 | 35.0 | 3290 | 1.6713 |
| 2.1715 | 36.0 | 3384 | 1.7287 |
| 2.2125 | 37.0 | 3478 | 1.6994 |
| 2.2032 | 38.0 | 3572 | 1.6896 |
| 2.21 | 39.0 | 3666 | 1.6793 |
| 2.1837 | 40.0 | 3760 | 1.6747 |
| 2.2136 | 41.0 | 3854 | 1.6728 |
| 2.1825 | 42.0 | 3948 | 1.6641 |
| 2.1419 | 43.0 | 4042 | 1.6829 |
| 2.1695 | 44.0 | 4136 | 1.6625 |
| 2.1478 | 45.0 | 4230 | 1.6680 |
| 2.1464 | 46.0 | 4324 | 1.6795 |
| 2.1809 | 47.0 | 4418 | 1.6775 |
| 2.174 | 48.0 | 4512 | 1.6668 |
| 2.1391 | 49.0 | 4606 | 1.6559 |
| 2.1466 | 50.0 | 4700 | 1.6658 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Minata/plbart-base-finetuned-ut-generator | Minata | 2023-02-26T22:00:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"plbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-24T22:36:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: plbart-base-finetuned-ut-generator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-base-finetuned-ut-generator
This model is a fine-tuned version of [uclanlp/plbart-base](https://huggingface.co/uclanlp/plbart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2744
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6096 | 0.44 | 500 | 0.3797 |
| 0.3706 | 0.89 | 1000 | 0.3465 |
| 0.3534 | 1.33 | 1500 | 0.3283 |
| 0.3132 | 1.78 | 2000 | 0.3142 |
| 0.305 | 2.22 | 2500 | 0.3044 |
| 0.2923 | 2.67 | 3000 | 0.2972 |
| 0.2908 | 3.11 | 3500 | 0.2911 |
| 0.2796 | 3.56 | 4000 | 0.2856 |
| 0.2731 | 4.0 | 4500 | 0.2814 |
| 0.2663 | 4.44 | 5000 | 0.2785 |
| 0.2638 | 4.89 | 5500 | 0.2764 |
| 0.2597 | 5.33 | 6000 | 0.2750 |
| 0.2522 | 5.78 | 6500 | 0.2744 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
polejowska/yolos-tiny-CD45RB-1000 | polejowska | 2023-02-26T22:00:14Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"yolos",
"object-detection",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-02-26T20:20:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: yolos-tiny-CD45RB-1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolos-tiny-CD45RB-1000
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4965 | 1.0 | 94 | 2.7799 |
| 3.4526 | 2.0 | 188 | 2.7380 |
| 3.4012 | 3.0 | 282 | 2.6721 |
| 3.2776 | 4.0 | 376 | 2.6651 |
| 3.2164 | 5.0 | 470 | 2.6555 |
| 3.2701 | 6.0 | 564 | 2.6489 |
| 3.1847 | 7.0 | 658 | 2.6993 |
| 3.0959 | 8.0 | 752 | 2.6364 |
| 3.0506 | 9.0 | 846 | 2.6464 |
| 3.0497 | 10.0 | 940 | 2.6304 |
| 3.0767 | 11.0 | 1034 | 2.6344 |
| 3.0397 | 12.0 | 1128 | 2.6142 |
| 2.982 | 13.0 | 1222 | 2.6787 |
| 2.883 | 14.0 | 1316 | 2.6492 |
| 2.8978 | 15.0 | 1410 | 2.6317 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
ivanlai/mt5-summarize-ch_trad | ivanlai | 2023-02-26T21:57:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-18T12:14:16Z | ---
tags:
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-summarize-ch_trad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-summarize-ch_trad
This model is a fine-tuned version of [uer/t5-small-chinese-cluecorpussmall](https://huggingface.co/uer/t5-small-chinese-cluecorpussmall) on the xlsum dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.2489
- eval_rouge1: 0.1313
- eval_rouge2: 0.0505
- eval_rougeL: 0.1275
- eval_rougeLsum: 0.1272
- eval_gen_len: 128.0
- eval_runtime: 541.5872
- eval_samples_per_second: 8.623
- eval_steps_per_second: 0.27
- epoch: 7.71
- step: 9000
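A minimal inference sketch (not from the card; the input text and generation length are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ivanlai/mt5-summarize-ch_trad")
model = AutoModelForSeq2SeqLM.from_pretrained("ivanlai/mt5-summarize-ch_trad")

text = "..."  # a Traditional-Chinese article (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=128)  # 128 matches the reported gen_len
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```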
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
Liapunov/poca-SoccerTwos | Liapunov | 2023-02-26T21:21:01Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-26T21:20:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: Liapunov/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Convolution/rl_course_vizdoom_health_gathering_supreme | Convolution | 2023-02-26T21:16:40Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T21:16:35Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.07 +/- 4.34
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Convolution/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count where it previously concluded.
|
lora-library/girl-zty-2 | lora-library | 2023-02-26T21:15:01Z | 5 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-26T21:14:58Z | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: girl_zty_2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - girl-zty-2
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "girl_zty_2" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: girl_zty_2




|
dyingc/q-FrozenLake-v1-8x8-noSlippery | dyingc | 2023-02-26T21:03:44Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T21:03:38Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
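# NOTE: load_from_hub is the helper defined in the course notebook, and gym must be imported first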
model = load_from_hub(repo_id="dyingc/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
damian0815/pashahlis-val-test-1e-6-ep110 | damian0815 | 2023-02-26T20:57:20Z | 7 | 0 | diffusers | [
"diffusers",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-26T19:05:15Z | ---
license: openrail
---
Epoch 110 (overtrained) from training on a dataset kindly provided by @pashahlis; see [https://huggingface.co/damian0815/pashahlis-val-test-1e-6-ep30](https://huggingface.co/damian0815/pashahlis-val-test-1e-6-ep30) for more information. |
ahmad-alismail/a2c-PandaReachDense-v2 | ahmad-alismail | 2023-02-26T20:38:12Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-26T20:12:22Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.99 +/- 0.20
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
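A hedged loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
import gym
import panda_gym  # registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="ahmad-alismail/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```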
## Hyperparameters
```python
policy = "MultiInputPolicy"
learning_rate = 0.001
gamma = 0.95
time_steps = 100000
...
```
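For reference, a sketch of how these hyperparameters would plausibly be wired into SB3 (environment registration via `panda-gym` is assumed):
```python
import gym
import panda_gym  # registers PandaReachDense-v2
from stable_baselines3 import A2C

env = gym.make("PandaReachDense-v2")
model = A2C("MultiInputPolicy", env, learning_rate=0.001, gamma=0.95, verbose=1)
model.learn(total_timesteps=100000)
model.save("a2c-PandaReachDense-v2")
```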
|
domadapter/domain_only_MR_books | domadapter | 2023-02-26T20:37:54Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:37:47Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_MR_books` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_MR_books", source="hf", set_active=True)
```
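The card does not state whether a prediction head is bundled with this adapter, so the following inference sketch is an assumption: it runs a forward pass through the adapted encoder and prints the argmax label only if a classification head supplies logits.
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("This book was a delight to read.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Only meaningful if the adapter ships with a classification head
if hasattr(outputs, "logits") and outputs.logits is not None:
    print(outputs.logits.argmax(dim=-1).item())
```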
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
domadapter/domain_only_MR_baby | domadapter | 2023-02-26T20:37:44Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:37:35Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_MR_baby` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_MR_baby", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
domadapter/domain_only_camera_photo_MR | domadapter | 2023-02-26T20:37:21Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:37:13Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_camera_photo_MR` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_camera_photo_MR", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
domadapter/domain_only_camera_photo_books | domadapter | 2023-02-26T20:37:10Z | 1 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:37:02Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_camera_photo_books` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_camera_photo_books", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
domadapter/domain_only_books_MR | domadapter | 2023-02-26T20:36:38Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:36:30Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_books_MR` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_books_MR", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
domadapter/domain_only_books_camera_photo | domadapter | 2023-02-26T20:36:27Z | 2 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:36:19Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_books_camera_photo` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_books_camera_photo", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
domadapter/domain_only_books_baby | domadapter | 2023-02-26T20:36:16Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sentiment/amazon",
"dataset:amazon",
"region:us"
]
| null | 2023-02-26T20:36:09Z | ---
tags:
- bert
- adapterhub:sentiment/amazon
- adapter-transformers
datasets:
- amazon
---
# Adapter `domadapter/domain_only_books_baby` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/amazon](https://adapterhub.ml/explore/sentiment/amazon/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("domadapter/domain_only_books_baby", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |