modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
ArtYac/a2c-AntBulletEnv-v0 | ArtYac | 2023-02-28T20:51:24Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T20:50:14Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1146.34 +/- 75.95
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
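In the meantime, here is a minimal loading-and-evaluation sketch. The checkpoint filename (`a2c-AntBulletEnv-v0.zip`) and the evaluation call are assumptions about how such SB3 models are usually published, not details taken from this card.
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub(
    repo_id="ArtYac/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
# If the repo also ships VecNormalize statistics, load and apply them first;
# otherwise the measured reward may not match the card.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```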
|
Leoxie2000/t5-small-finetuned-xsum | Leoxie2000 | 2023-02-28T20:38:14Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-28T20:18:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 38.7231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9223
- Rouge1: 38.7231
- Rouge2: 16.4719
- Rougel: 32.3585
- Rougelsum: 35.8234
- Gen Len: 16.209
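A minimal usage sketch (not part of the original card): the Hub id comes from this card, while the sample dialogue and generation settings are purely illustrative.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for dialogue summarization.
summarizer = pipeline("summarization", model="Leoxie2000/t5-small-finetuned-xsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=30)[0]["summary_text"])
```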
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1235 | 1.0 | 921 | 1.9223 | 38.7231 | 16.4719 | 32.3585 | 35.8234 | 16.209 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
stinoco/a2c-AntBulletEnv-v0 | stinoco | 2023-02-28T20:37:59Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T20:36:43Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1786.76 +/- 84.87
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Qilex/rl_course_vizdoom_health_gathering_supreme | Qilex | 2023-02-28T20:33:15Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T20:33:04Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.99 +/- 1.78
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Qilex/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ihorbilyk/donut-base-remittance | ihorbilyk | 2023-02-28T20:30:14Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-02-28T18:49:37Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-remittance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-remittance
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
polejowska/yolos-tiny-CD45RB-1000-att | polejowska | 2023-02-28T20:14:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"yolos",
"object-detection",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-02-28T19:47:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: yolos-tiny-CD45RB-1000-att
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolos-tiny-CD45RB-1000-att
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5369 | 1.0 | 94 | 2.7680 |
| 3.4721 | 2.0 | 188 | 2.7605 |
| 3.4243 | 3.0 | 282 | 2.6854 |
| 3.3027 | 4.0 | 376 | 2.6661 |
| 3.2875 | 5.0 | 470 | 2.6692 |
| 3.2959 | 6.0 | 564 | 2.6477 |
| 3.2286 | 7.0 | 658 | 2.6885 |
| 3.1364 | 8.0 | 752 | 2.6583 |
| 3.0872 | 9.0 | 846 | 2.6667 |
| 3.0935 | 10.0 | 940 | 2.6170 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Sorenmc/q-FrozenLake-v1-4x4-noSlippery | Sorenmc | 2023-02-28T20:14:00Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T20:13:57Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Sorenmc/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
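`load_from_hub` above is not a function from a published package; in the Deep RL Course it is a small helper defined in the notebook. A sketch of that helper and of reading out the greedy policy, assuming the pickled dict stores the Q-table under `"qtable"` and the environment id under `"env_id"`:
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the model dict pushed from the course notebook."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub("Sorenmc/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
qtable = np.array(model["qtable"])     # assumed key name
env = gym.make(model["env_id"], is_slippery=False)

greedy_policy = qtable.argmax(axis=1)  # best action for each discrete state
print(greedy_policy.reshape(4, 4))     # 4x4 grid of greedy actions
```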
|
Draff/Draffs-Loras | Draff | 2023-02-28T20:06:34Z | 0 | 2 | null | [
"license:other",
"region:us"
]
| null | 2023-02-23T13:12:47Z | ---
license: other
---
Honestly, I don't really care too much about what happens to these LoRAs as long as you don't sell them or claim them as your own.
I have no idea what I'm doing.
Civitai account: https://civitai.com/user/Draff
I'll add previews and stuff soon. |
pedroleme/sesh4 | pedroleme | 2023-02-28T19:57:44Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-28T19:46:42Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sesh4 Dreambooth model trained by pedroleme with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
SarvasvaK/Taxi-v3 | SarvasvaK | 2023-02-28T19:41:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T19:41:42Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.81
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="SarvasvaK/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zyoscovits/a2c-PandaReachDense-v2 | zyoscovits | 2023-02-28T19:31:49Z | 9 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-22T20:57:44Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.47 +/- 0.18
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
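As with the other SB3 cards, a minimal loading sketch. The filename and the `panda_gym` import are assumptions about how this model was published, not details from the card.
```python
import gym
import panda_gym  # noqa: F401  (registers PandaReachDense-v2)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub("zyoscovits/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```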
|
LeoAgis/Reinforce-Copter-1 | LeoAgis | 2023-02-28T19:30:46Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T19:12:04Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Copter-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.60 +/- 26.38
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bnowak1831/pp0-LunarLander-v2 | bnowak1831 | 2023-02-28T19:30:33Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:49:45Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -125.15 +/- 69.78
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.0002,
 'num_envs': 16,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 16,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'bnowak1831/pp0-LunarLander-v2',
 'batch_size': 2048,
 'minibatch_size': 128}
```
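The last two entries are derived from the rollout settings rather than set by hand; a quick check of the arithmetic, following how CleanRL computes them:
```python
num_envs, num_steps, num_minibatches = 16, 128, 16
batch_size = num_envs * num_steps                # 16 * 128 = 2048 transitions per update
minibatch_size = batch_size // num_minibatches   # 2048 // 16 = 128
print(batch_size, minibatch_size)                # -> 2048 128
```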
|
npit/Reinforce-pixelcopter-baseline | npit | 2023-02-28T19:26:38Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T19:26:35Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-baseline
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.30 +/- 21.18
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
NCHS/SANDS | NCHS | 2023-02-28T19:26:32Z | 58 | 5 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"doi:10.57967/hf/0414",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-19T22:35:16Z | ---
language:
- en
tags:
- text-classification
license: cc0-1.0
library: Transformers
widget:
- text: "sdfsdfa"
example_title: "Gibberish"
- text: "idkkkkk"
example_title: "Uncertainty"
- text: "Because you asked"
example_title: "Refusal"
- text: "I am a cucumber"
example_title: "High-risk"
- text: "My job went remote and I needed to take care of my kids"
example_title: "Valid"
---
# SANDS
_Semi-Automated Non-response Detection for Surveys_
Non-response detection designed to be used for open-ended survey text in conjunction with human reviewers.
## Model Details
Model Description: This model is a fine-tuned version of the supervised SimCSE BERT base uncased model. It was introduced at [AAPOR](https://www.aapor.org/) 2022 at the talk _Toward a Semi-automated item nonresponse detector model for open-response data_. The model is uncased, so it treats `important`, `Important`, and `ImPoRtAnT` the same.
* Developed by: [National Center for Health Statistics](https://www.cdc.gov/nchs/index.htm), Centers for Disease Control and Prevention
* Model Type: Text Classification
* Language(s): English
* License: Apache-2.0
Parent Model: For more details about SimCSE, we encourage users to check out the SimCSE [Github repository](https://github.com/princeton-nlp/SimCSE), and the [base model](https://huggingface.co/princeton-nlp/sup-simcse-bert-base-uncased) on HuggingFace.
## How to Get Started with the Model
### Example of classification of a set of responses:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import pandas as pd
# Load the model
model_location = "NCHS/SANDS"
model = AutoModelForSequenceClassification.from_pretrained(model_location)
tokenizer = AutoTokenizer.from_pretrained(model_location)
# Create example responses to test
responses = [
"sdfsdfa",
"idkkkkk",
"Because you asked",
"I am a cucumber",
"My job went remote and I needed to take care of my kids",
]
# Run the model and compute a score for each response
with torch.no_grad():
tokens = tokenizer(responses, padding=True, truncation=True, return_tensors="pt")
output = model(**tokens)
scores = torch.softmax(output.logits, dim=1).numpy()
# Display the scores in a table
columns = ["Gibberish", "Uncertainty", "Refusal", "High-risk", "Valid"]
df = pd.DataFrame(scores, columns=columns)
df.index.name = "Response"
print(df)
```
|Response| Gibberish| Uncertainty| Refusal| High-risk| Valid|
|--------|---------------|-----------------|-----------|-----------------|-----------|
|sdfsdfa| 0.998| 0.000| 0.000| 0.000| 0.000|
|idkkkkk| 0.002| 0.995| 0.001| 0.001| 0.001|
|Because you asked| 0.001| 0.001| 0.976| 0.006| 0.014|
|I am a cucumber| 0.001| 0.001| 0.002| 0.797| 0.178|
|My job went remote and I needed to take care of my kids| 0.000| 0.000| 0.000| 0.000| 1.000|
Alternatively, you can load the model using a pipeline
```python
from transformers import pipeline
pipe = pipeline("text-classification", "NCHS/SANDS")
print( pipe(responses) )
```
```python
[{'label': 'Gibberish', 'score': 0.9978908896446228},
{'label': 'Uncertainty', 'score': 0.9950007796287537},
{'label': 'Refusal', 'score': 0.9775006771087646},
{'label': 'High-risk', 'score': 0.9804121255874634},
{'label': 'Valid', 'score': 0.9997561573982239}]
```
With the pipeline, set `top_k` to see the full output:
```python
pipe(responses, top_k=5)
```
Finally, if you'd like to use a local GPU, set the device to the GPU number (usually 0).
```python
pipe = pipeline("text-classification", "NCHS/SANDS", device=0)
```
## Uses
### Direct Uses
This model is intended to be used on survey responses for data cleaning to help researchers filter out non-responsive responses or junk responses to aid in research and analysis. The model will return a score for a response in 5 different categories: Gibberish, Refusal, Uncertainty, High Risk, and Valid as a probability vector that sums to 1.
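For example, a small sketch of how these scores could drive a cleaning pass; the routing rule and the 0.9 threshold below are illustrative choices, not part of the model card.
```python
from transformers import pipeline

pipe = pipeline("text-classification", "NCHS/SANDS")

responses = ["sdfsdfa", "I am a cucumber", "staying home, avoiding crowds, still wear masks"]

# Keep clear "Valid" answers, route "High-risk" or low-confidence answers to a
# human reviewer, and drop the rest (Gibberish, Uncertainty, Refusal).
for response, result in zip(responses, pipe(responses)):
    label, score = result["label"], result["score"]
    if label == "High-risk" or score < 0.9:
        decision = "human review"
    elif label == "Valid":
        decision = "keep"
    else:
        decision = "drop"
    print(f"{decision:13s} {label:12s} {score:.3f}  {response}")
```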
### Response types
+ **Gibberish**: Nonsensical response where the respondent entered text without regard for English syntax. Examples: `ksdhfkshgk` and `sadsadsadsadsadsadsad`
+ **Refusal**: Responses in valid English that are either a direct refusal to answer the question asked or that provide no contextual relationship to the question asked. Examples: `Because` or `Meow`.
+ **Uncertainty**: Responses where the respondent does not understand the question, does not know the answer to the question, or does not know how to respond to the question. Examples: `I dont know` or `unsure what you are asking`.
+ **High-Risk**: Responses that may be valid depending on the context and content of the question. These responses require human subject matter expertise to classify as a valid response or not. Examples: `Necessity` or `I am a cucumber`
+ **Valid**: Responses that answer the question at hand and provide insight into the respondent's thoughts on the subject matter of the question. Examples: `COVID began for me when my children's school went online and I needed to stay home to watch them` or `staying home, avoiding crowds, still wear masks`
## Misuses and Out-of-scope Use
The model has been trained specifically to identify survey non-response in open-ended responses, where the respondent has given an answer that does not respond to the question at hand or provide any meaningful insight. Some examples of these types of responses are `meow`, `ksdhfkshgk`, or `idk`. The model was fine-tuned on 3,000 labeled open-ended responses to web probes on questions relating to the COVID-19 pandemic gathered from the [Research and Development Survey or RANDS](https://www.cdc.gov/nchs/rands/index.htm) conducted by the Division of Research and Methodology at the National Center for Health Statistics. Web probes are questions that apply probing techniques from cognitive interviewing to survey question design and are different from traditional open-ended survey questions. The context of our labeled responses is limited to COVID and health topics, so responses outside this scope may see a drop in performance.
The responses the model is trained on are also from both web- and phone-based open-ended probes. The model may be less effective on more traditional open-ended survey questions or on responses provided in other mediums.
This model does not assess the factual accuracy of responses or filter out responses with different demographic biases. It was not trained to verify facts about people or events, so using it for such classification is out of scope.
We did not train the model to recognize non-response in any language other than English. Responses in languages other than English are out of scope and the model will perform poorly. Any correct classifications are a result of the base SimCSE or BERT models.
## Risks, Limitations, and Biases
To investigate whether there were differences between demographic groups in sensitivity and specificity, we conducted two-tailed Z-tests across demographic groups. These included education (some college or less and bachelor's or more), sex (male or female), mode (computer or telephone), race and ethnicity (non-Hispanic White, non-Hispanic Black, Hispanic, and all others who are non-Hispanic), and age (18-29, 30-44, 45-59, and 60+). There were 4,813 responses to 3 probes. To control the family-wise error rate, the Bonferroni correction was applied to the alpha level (α < 0.00167).
There were statistically significant differences in specificity between education levels, mode, and White and Black respondents. There were no statistically significant differences in sensitivity. Respondents with some college or less had lower specificity compared to those with more education (0.73 versus 0.80, p < 0.0001). Respondents who used a smartphone or computer to complete their survey had a higher specificity than those who completed the survey over the telephone (0.77 versus 0.70, p < 0.0001). Black respondents had a lower specificity than White respondents (0.65 versus 0.78, p < 0.0001). Effect sizes for education and mode were small (h = 0.17 and h = 0.16, respectively) while the effect size for race was between small and medium (h = 0.28).
As the model was fine-tuned from SimCSE, itself fine-tuned from BERT, it will reproduce all biases inherent in these base models. Due to tokenization, the model may incorrectly classify typos, especially in acronyms. For example: `LGBTQ` is valid, while `LBGTQ` is classified as gibberish.
## Training
#### Training Data
The model was fine-tuned on 3,200 labeled open-ended responses from [RANDS during COVID 19 Rounds 1 and 2](https://www.cdc.gov/nchs/rands/index.htm). The base SimCSE BERT model was trained on BookCorpus and English Wikipedia.
#### Training procedure
+ Learning rate: 5e-5
+ Batch size: 16
+ Number training epochs: 4
+ Base Model pooling dimension: 768
+ Number of labels: 5
## Suggested citation
```bibtex
@misc{cibellihibben2023sands,
  title={Semi-Automated Nonresponse Detection for Open-text Survey Data},
  author={Kristen Cibelli Hibben and Zachary Smith and Ben Rogers and Valerie Ryan and Paul Scanlon and Kristen Miller and Travis Hoppe},
  year={2023},
  url={https://huggingface.co/NCHS/SANDS},
  doi={10.57967/hf/0414}
}
```
## Open source licence
Model and code, including source files and code samples if any in the content, are released as open source under the [Creative Commons Universal Public Domain](https://creativecommons.org/publicdomain/zero/1.0/). This means you can use the code, model, and content in this repository in your own projects, except for any official trademarks.
Open source projects are made available and contributed to under licenses that include terms that, for the protection of contributors, make clear that the projects are offered "as-is", without warranty, and disclaiming liability for damages resulting from using the projects. This model is no different. The open content license it is offered under includes such terms.
|
khaled5321/rl_course_vizdoom_health_gathering_supreme | khaled5321 | 2023-02-28T19:21:03Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-25T19:41:00Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.99 +/- 6.70
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r khaled5321/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Frorozcol/Reinforce-Cartpole-v1 | Frorozcol | 2023-02-28T19:18:04Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T19:17:56Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Taratata/ppo-SnowballTarget | Taratata | 2023-02-28T19:07:45Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-28T19:07:39Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: Taratata/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play
|
mqy/mt5-small-finetuned-28feb-1 | mqy | 2023-02-28T19:04:53Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-02-28T09:36:17Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-28feb-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-28feb-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3686
- Rouge1: 20.86
- Rouge2: 6.65
- Rougel: 20.57
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 13
- eval_batch_size: 13
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 3.3725 | 2.09 | 500 | 2.5493 | 17.49 | 5.58 | 17.34 |
| 2.9876 | 4.18 | 1000 | 2.4931 | 18.9 | 5.35 | 18.8 |
| 2.7925 | 6.28 | 1500 | 2.4054 | 18.26 | 5.11 | 18.01 |
| 2.6561 | 8.37 | 2000 | 2.3951 | 19.83 | 5.84 | 19.43 |
| 2.5491 | 10.46 | 2500 | 2.3602 | 19.11 | 5.69 | 18.8 |
| 2.4504 | 12.55 | 3000 | 2.3458 | 20.83 | 6.74 | 20.52 |
| 2.3708 | 14.64 | 3500 | 2.3739 | 20.69 | 6.53 | 20.43 |
| 2.3075 | 16.74 | 4000 | 2.3414 | 19.32 | 6.58 | 19.12 |
| 2.2512 | 18.83 | 4500 | 2.3589 | 19.38 | 6.07 | 19.0 |
| 2.1554 | 20.92 | 5000 | 2.3686 | 20.86 | 6.65 | 20.57 |
| 2.1141 | 23.01 | 5500 | 2.3768 | 20.71 | 6.46 | 20.37 |
| 2.0774 | 25.1 | 6000 | 2.3627 | 20.25 | 6.22 | 20.0 |
| 2.0315 | 27.2 | 6500 | 2.3521 | 20.37 | 6.28 | 20.05 |
| 1.9787 | 29.29 | 7000 | 2.3699 | 20.75 | 6.6 | 20.43 |
| 1.9645 | 31.38 | 7500 | 2.3554 | 20.27 | 5.9 | 20.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
CloXD/RL-PixelCopter-v0 | CloXD | 2023-02-28T18:48:33Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:45:55Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-PixelCopter-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.40 +/- 29.29
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
pierreguillou/layoutxlm-finetuned-xfund-fr | pierreguillou | 2023-02-28T18:48:14Z | 79 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-28T18:20:29Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.10.0+cu111
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Iamvincent/LunarLander-v2 | Iamvincent | 2023-02-28T18:47:35Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:47:28Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -134.65 +/- 44.59
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Iamvincent/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
phonenix/a2c-AntBulletEnv-v0 | phonenix | 2023-02-28T18:43:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:42:22Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1310.31 +/- 360.56
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
KoRiF/rl_course_vizdoom_health_gathering_supreme | KoRiF | 2023-02-28T18:37:07Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T11:51:59Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.06 +/- 5.91
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r KoRiF/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
matolszew/q-Taxi-v3-default | matolszew | 2023-02-28T18:37:03Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:36:52Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-default
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: -92.27 +/- 26.64
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="matolszew/q-Taxi-v3-default", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
imar0/Reinforce-CartPole-v1 | imar0 | 2023-02-28T18:32:45Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:32:36Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 464.20 +/- 107.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
staycoolish/ppo-Pyramids | staycoolish | 2023-02-28T18:30:26Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:30:20Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: staycoolish/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play
|
Dabe/Taxi | Dabe | 2023-02-28T18:27:35Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:27:25Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Dabe/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
robotman0/unit1-ppo-LunarLander-v2 | robotman0 | 2023-02-28T18:23:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:11:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.06 +/- 16.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
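In the meantime, a minimal loading sketch. The checkpoint filename is an assumption; check the repo's file list if it differs.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# LunarLander-v2 needs the Box2D extra: pip install gym[box2d]
checkpoint = load_from_hub("robotman0/unit1-ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```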
|
Dabe/q-FrozenLake-v1-4x4-noSlippery | Dabe | 2023-02-28T18:18:54Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:34:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Dabe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Iamvincent/ppo-LunarLander-v2 | Iamvincent | 2023-02-28T18:17:46Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-07T02:30:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.81 +/- 13.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Nnarruqt/ppo-PyramidsTraining | Nnarruqt | 2023-02-28T18:17:30Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:16:08Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: Nnarruqt/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play
|
bnowak1831/rl_course_vizdoom_health_gathering_supreme | bnowak1831 | 2023-02-28T18:08:25Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T18:08:19Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.47 +/- 5.01
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r bnowak1831/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
adam1brownell/u5_snowball | adam1brownell | 2023-02-28T17:59:59Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:59:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: adam1brownell/u5_snowball
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play
|
mitra-mir/setfit_model_Ireland_binary_label1_epochs2 | mitra-mir | 2023-02-28T17:44:30Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-26T23:38:45Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 163 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 326,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Lakoc/rl_course_vizdoom_health_gathering_supreme | Lakoc | 2023-02-28T17:40:29Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:40:24Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.56 +/- 5.18
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Lakoc/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
xiazeng/rl_course_vizdoom_health_gathering_supreme | xiazeng | 2023-02-28T17:38:23Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:38:13Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.62 +/- 2.33
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r xiazeng/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/ddpg-Ant-v3-534515347 | qgallouedec | 2023-02-28T17:17:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:17:16Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 472.14 +/- 178.61
name: mean_reward
verified: false
---
# **DDPG** Agent playing **Ant-v3**
This is a trained model of a **DDPG** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ddpg --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ddpg --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ddpg --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ddpg --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('normalize', False)])
```
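After downloading the checkpoint with `rl_zoo3.load_from_hub`, it can also be loaded directly in Python. This is a minimal sketch; the exact path under `logs/` depends on the zoo's folder layout and is an assumption here:
```python
from stable_baselines3 import DDPG

# Path is an assumption - check where rl_zoo3.load_from_hub actually saved the zip
model = DDPG.load("logs/ddpg/Ant-v3_1/Ant-v3.zip")
print(model.policy)
```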
|
qgallouedec/ddpg-Ant-v3-1157720158 | qgallouedec | 2023-02-28T17:15:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:15:37Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 248.80 +/- 287.01
name: mean_reward
verified: false
---
# **DDPG** Agent playing **Ant-v3**
This is a trained model of a **DDPG** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ddpg --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ddpg --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ddpg --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ddpg --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
mohamedlamine/wav2vec2-finetuned-wolofdata | mohamedlamine | 2023-02-28T17:15:18Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-28T08:41:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-finetuned-wolofdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-finetuned-wolofdata
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Wer: 0.6774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0723 | 0.75 | 100 | 0.7747 | 0.6774 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
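For quick inference, a hedged sketch (assuming the uploaded checkpoint loads with the standard Wav2Vec2 CTC classes; the audio path is hypothetical):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mohamedlamine/wav2vec2-finetuned-wolofdata",
)
print(asr("path/to/wolof_sample.wav"))  # hypothetical audio file
```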
|
qgallouedec/ddpg-Ant-v3-2929305474 | qgallouedec | 2023-02-28T17:13:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:12:52Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 642.13 +/- 136.03
name: mean_reward
verified: false
---
# **DDPG** Agent playing **Ant-v3**
This is a trained model of a **DDPG** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ddpg --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ddpg --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ddpg --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ddpg --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
xiazeng/ppo-CartPole-v1 | xiazeng | 2023-02-28T17:06:33Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:06:28Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -126.85 +/- 90.27
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'xiazeng/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
qgallouedec/tqc-PandaPickAndPlace-v1-3157870761 | qgallouedec | 2023-02-28T17:05:58Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:04:52Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v1
type: PandaPickAndPlace-v1
metrics:
- type: mean_reward
value: -7.30 +/- 2.00
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaPickAndPlace-v1**
This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env PandaPickAndPlace-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env PandaPickAndPlace-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env PandaPickAndPlace-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env PandaPickAndPlace-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env PandaPickAndPlace-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env PandaPickAndPlace-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( goal_selection_strategy='future', n_sampled_goal=4, )"),
('tau', 0.05),
('normalize', False)])
```
# Environment Arguments
```python
{'render': True}
```
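A hedged loading sketch: TQC is implemented in `sb3_contrib` rather than core Stable-Baselines3, and the checkpoint path below assumes the zoo's usual folder layout:
```python
from sb3_contrib import TQC

# Path is an assumption - check where rl_zoo3.load_from_hub saved the zip;
# panda_gym must be installed to rebuild the environment for evaluation
model = TQC.load("logs/tqc/PandaPickAndPlace-v1_1/PandaPickAndPlace-v1.zip")
```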
|
qgallouedec/tqc-Humanoid-v3-166422443 | qgallouedec | 2023-02-28T17:02:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:01:45Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 7726.61 +/- 1828.00
name: mean_reward
verified: false
---
# **TQC** Agent playing **Humanoid-v3**
This is a trained model of a **TQC** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
CloXD/RL-CartPole-v1 | CloXD | 2023-02-28T17:01:41Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:47:04Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
qgallouedec/tqc-FetchPush-v1-3251758816 | qgallouedec | 2023-02-28T17:01:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T17:01:09Z | ---
library_name: stable-baselines3
tags:
- FetchPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPush-v1
type: FetchPush-v1
metrics:
- type: mean_reward
value: -13.70 +/- 11.67
name: mean_reward
verified: false
---
# **TQC** Agent playing **FetchPush-v1**
This is a trained model of a **TQC** agent playing **FetchPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env FetchPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPush-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( goal_selection_strategy='future', n_sampled_goal=4, )"),
('tau', 0.05),
('normalize', False)])
```
|
parsasam/rl_course_vizdoom_health_gathering_supreme | parsasam | 2023-02-28T16:55:35Z | 0 | 0 | sample-factory | [
"sample-factory",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:54:39Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.63 +/- 4.53
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r parsasam/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
qgallouedec/tqc-FetchPickAndPlace-v1-3795610126 | qgallouedec | 2023-02-28T16:53:27Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchPickAndPlace-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:53:03Z | ---
library_name: stable-baselines3
tags:
- FetchPickAndPlace-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPickAndPlace-v1
type: FetchPickAndPlace-v1
metrics:
- type: mean_reward
value: -10.20 +/- 4.62
name: mean_reward
verified: false
---
# **TQC** Agent playing **FetchPickAndPlace-v1**
This is a trained model of a **TQC** agent playing **FetchPickAndPlace-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPickAndPlace-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPickAndPlace-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPickAndPlace-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPickAndPlace-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env FetchPickAndPlace-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPickAndPlace-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, )'),
('tau', 0.05),
('normalize', False)])
```
|
qgallouedec/tqc-FetchPush-v1-2077979061 | qgallouedec | 2023-02-28T16:51:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:50:57Z | ---
library_name: stable-baselines3
tags:
- FetchPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPush-v1
type: FetchPush-v1
metrics:
- type: mean_reward
value: -13.70 +/- 12.17
name: mean_reward
verified: false
---
# **TQC** Agent playing **FetchPush-v1**
This is a trained model of a **TQC** agent playing **FetchPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env FetchPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPush-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, )'),
('tau', 0.05),
('normalize', False)])
```
|
qgallouedec/tqc-FetchPush-v1-3613026928 | qgallouedec | 2023-02-28T16:49:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:48:53Z | ---
library_name: stable-baselines3
tags:
- FetchPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPush-v1
type: FetchPush-v1
metrics:
- type: mean_reward
value: -11.90 +/- 7.49
name: mean_reward
verified: false
---
# **TQC** Agent playing **FetchPush-v1**
This is a trained model of a **TQC** agent playing **FetchPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env FetchPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPush-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, )'),
('tau', 0.05),
('normalize', False)])
```
|
annelegendre/q-Taxi-v3 | annelegendre | 2023-02-28T16:47:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:52:14Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your install

# `load_from_hub` is the pickle-loading helper from the Deep RL Course notebook (not a standard library import)
model = load_from_hub(repo_id="annelegendre/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gokuls/bert_12_layer_model_v2 | gokuls | 2023-02-28T16:47:31Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-27T13:26:19Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1091
- Accuracy: 0.5983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.4137 | 1.0 | 45772 | 3.1519 | 0.4605 |
| 2.7951 | 2.0 | 91544 | 2.4478 | 0.5519 |
| 2.4298 | 3.0 | 137316 | 2.2522 | 0.5784 |
| 2.2864 | 4.0 | 183088 | 2.1548 | 0.5920 |
| 2.2142 | 5.0 | 228860 | 2.1091 | 0.5983 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Suprabound/dqn-SpaceInvaders-Suprav1 | Suprabound | 2023-02-28T16:41:40Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:40:59Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 456.50 +/- 177.92
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Suprabound -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Suprabound -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Suprabound
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
mm-ai/vit-mlo-512-breat_composition | mm-ai | 2023-02-28T16:38:50Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:preprocessed1024_config",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-28T13:40:08Z | ---
tags:
- generated_from_trainer
datasets:
- preprocessed1024_config
metrics:
- accuracy
- f1
model-index:
- name: vit-mlo-512-breat_composition
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: preprocessed1024_config
type: preprocessed1024_config
args: default
metrics:
- name: Accuracy
type: accuracy
value:
accuracy: 0.5791457286432161
- name: F1
type: f1
value:
f1: 0.5749067914290308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-mlo-512-breat_composition
This model is a fine-tuned version of [](https://huggingface.co/) on the preprocessed1024_config dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3123
- Accuracy: {'accuracy': 0.5791457286432161}
- F1: {'f1': 0.5749067914290308}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|:---------------------------:|
| 1.2679 | 1.0 | 796 | 1.0281 | {'accuracy': 0.5062814070351759} | {'f1': 0.38950358034816535} |
| 0.9805 | 2.0 | 1592 | 0.9240 | {'accuracy': 0.5672110552763819} | {'f1': 0.5273112700912543} |
| 0.9167 | 3.0 | 2388 | 0.9608 | {'accuracy': 0.5477386934673367} | {'f1': 0.45736748568671376} |
| 0.8292 | 4.0 | 3184 | 0.8973 | {'accuracy': 0.5891959798994975} | {'f1': 0.5783349603036094} |
| 0.7695 | 5.0 | 3980 | 1.0477 | {'accuracy': 0.5571608040201005} | {'f1': 0.5379432393338944} |
| 0.6912 | 6.0 | 4776 | 0.9479 | {'accuracy': 0.585427135678392} | {'f1': 0.5766494177636581} |
| 0.61 | 7.0 | 5572 | 1.1280 | {'accuracy': 0.5703517587939698} | {'f1': 0.5560158679652624} |
| 0.5591 | 8.0 | 6368 | 1.1866 | {'accuracy': 0.5741206030150754} | {'f1': 0.5541999644498281} |
| 0.5021 | 9.0 | 7164 | 1.1537 | {'accuracy': 0.582286432160804} | {'f1': 0.566315815243799} |
| 0.4262 | 10.0 | 7960 | 1.3123 | {'accuracy': 0.5791457286432161} | {'f1': 0.5749067914290308} |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
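A hedged inference sketch (assuming the checkpoint and its preprocessor load with the standard ViT image-classification classes; the image path is hypothetical):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mm-ai/vit-mlo-512-breat_composition",
)
print(classifier("path/to/mlo_view.png"))  # hypothetical MLO-view image
```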
|
bnowak1831/a2c-AntBulletEnv-v0 | bnowak1831 | 2023-02-28T16:35:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:34:23Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1608.24 +/- 135.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual `<algo>-<env>.zip` naming convention used by `huggingface_sb3`):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption based on the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub("bnowak1831/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
psaegert/pmtrendviz-tfidf-3m-250-2g | psaegert | 2023-02-28T16:32:48Z | 5 | 0 | transformers | [
"transformers",
"joblib",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-28T07:38:34Z | ---
license: apache-2.0
---
Medium TF-IDF-based model for [pmtrendviz](https://github.com/psaegert/pmtrendviz)
### Training
- Training Samples: 3,000,000
- `n_components`: 250
- `n_clusters`: 250
- `n_gram_range`: (1, 2) |
Nnarruqt/ppo-SnowBallTarget1 | Nnarruqt | 2023-02-28T16:27:24Z | 20 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:27:18Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Nnarruqt/ppo-SnowBallTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
qgallouedec/ars-Ant-v3-2422697030 | qgallouedec | 2023-02-28T16:23:03Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:22:36Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 4762.99 +/- 159.24
name: mean_reward
verified: false
---
# **ARS** Agent playing **Ant-v3**
This is a trained model of a **ARS** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ars --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ars --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ars --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ars --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('alive_bonus_offset', -1),
('delta_std', 0.025),
('learning_rate', 0.015),
('n_delta', 60),
('n_envs', 1),
('n_timesteps', 75000000.0),
('n_top', 20),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'LinearPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/ppo_lstm-HumanoidBulletEnv-v0-3214896061 | qgallouedec | 2023-02-28T16:21:10Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"HumanoidBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:20:26Z | ---
library_name: stable-baselines3
tags:
- HumanoidBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HumanoidBulletEnv-v0
type: HumanoidBulletEnv-v0
metrics:
- type: mean_reward
value: 192.17 +/- 64.50
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **HumanoidBulletEnv-v0**
This is a trained model of a **RecurrentPPO** agent playing **HumanoidBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env HumanoidBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env HumanoidBulletEnv-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env HumanoidBulletEnv-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env HumanoidBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env HumanoidBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env HumanoidBulletEnv-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 2048),
('n_timesteps', 10000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/ppo_lstm-BipedalWalkerHardcore-v3-3452026630 | qgallouedec | 2023-02-28T16:20:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:18:52Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -14.95 +/- 35.98
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **RecurrentPPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.2'),
('ent_coef', 0.001),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 'lin_3e-4'),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 100000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('policy_kwargs',
'dict( ortho_init=False, activation_fn=nn.ReLU, '
'lstm_hidden_size=64, enable_critic_lstm=True, '
'net_arch=dict(pi=[64], vf=[64]) )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
Anandhulk/pegasus-scientific_lay | Anandhulk | 2023-02-28T16:19:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:scientific_lay_summarisation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-24T10:58:34Z | ---
tags:
- generated_from_trainer
datasets:
- scientific_lay_summarisation
model-index:
- name: pegasus-scientific_lay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-scientific_lay
This model is a fine-tuned version of [Anandhulk/pegasus-scientific_lay](https://huggingface.co/Anandhulk/pegasus-scientific_lay) on the scientific_lay_summarisation dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5004 | 1.0 | 774 | 2.3482 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
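A hedged usage sketch with the summarization pipeline (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Anandhulk/pegasus-scientific_lay")
text = "Paste the article or abstract to summarise here."
print(summarizer(text, max_length=128, min_length=32, do_sample=False))
```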
|
qgallouedec/ppo_lstm-BipedalWalkerHardcore-v3-678870063 | qgallouedec | 2023-02-28T16:17:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:16:32Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -0.10 +/- 0.02
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **RecurrentPPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.2'),
('ent_coef', 0.001),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 'lin_3e-4'),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 100000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('policy_kwargs',
'dict( ortho_init=False, activation_fn=nn.ReLU, '
'lstm_hidden_size=64, enable_critic_lstm=True, '
'net_arch=dict(pi=[64], vf=[64]) )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/ppo_lstm-BipedalWalkerHardcore-v3-4163478442 | qgallouedec | 2023-02-28T16:16:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:15:04Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -2.85 +/- 0.24
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **RecurrentPPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.2'),
('ent_coef', 0.001),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 'lin_3e-4'),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 100000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('policy_kwargs',
'dict( ortho_init=False, activation_fn=nn.ReLU, '
'lstm_hidden_size=64, enable_critic_lstm=True, '
'net_arch=dict(pi=[64], vf=[64]) )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
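The checkpoint can also be loaded programmatically instead of through the CLI. The sketch below is illustrative only: the in-repo filename is an assumption (the RL Zoo usually stores the policy as `<algo>-<env>.zip`), and since this run used `normalize: True`, the saved `VecNormalize` statistics would also need to be applied to reproduce the reported score.
```python
import gym
import numpy as np
from huggingface_sb3 import load_from_hub
from sb3_contrib import RecurrentPPO

# In-repo filename is an assumption based on the usual RL Zoo layout
checkpoint = load_from_hub(
    "qgallouedec/ppo_lstm-BipedalWalkerHardcore-v3-4163478442",
    "ppo_lstm-BipedalWalkerHardcore-v3.zip",
)
model = RecurrentPPO.load(checkpoint)

env = gym.make("BipedalWalkerHardcore-v3")
obs = env.reset()
lstm_states = None                          # hidden state of the recurrent policy
episode_starts = np.ones((1,), dtype=bool)  # True at the start of each episode
for _ in range(1000):
    action, lstm_states = model.predict(
        obs, state=lstm_states, episode_start=episode_starts, deterministic=True
    )
    obs, reward, done, info = env.step(action)
    episode_starts = np.array([done])
    if done:
        obs = env.reset()
```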
|
qgallouedec/a2c-BipedalWalkerHardcore-v3-123042218 | qgallouedec | 2023-02-28T16:11:10Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:10:36Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -3.60 +/- 44.20
name: mean_reward
verified: false
---
# **A2C** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of an **A2C** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.001),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 32),
('n_steps', 8),
('n_timesteps', 200000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-BipedalWalkerHardcore-v3-2089306450 | qgallouedec | 2023-02-28T16:10:27Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:09:48Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -20.90 +/- 57.48
name: mean_reward
verified: false
---
# **A2C** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of an **A2C** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.001),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 32),
('n_steps', 8),
('n_timesteps', 200000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-Humanoid-v3-4227453683 | qgallouedec | 2023-02-28T16:09:12Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:08:56Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 378.38 +/- 92.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **Humanoid-v3**
This is a trained model of an **A2C** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo a2c --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
qgallouedec/a2c-BipedalWalkerHardcore-v3-2508703001 | qgallouedec | 2023-02-28T16:07:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T16:07:16Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: 122.28 +/- 111.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of an **A2C** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.001),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 32),
('n_steps', 8),
('n_timesteps', 200000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
unagui/ppo-Huggy | unagui | 2023-02-28T15:33:10Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:33:02Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: unagui/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
qgallouedec/tqc-Hopper-v3-1346000078 | qgallouedec | 2023-02-28T15:09:51Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:09:30Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3726.60 +/- 10.89
name: mean_reward
verified: false
---
# **TQC** Agent playing **Hopper-v3**
This is a trained model of a **TQC** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('top_quantiles_to_drop_per_net', 5),
('normalize', False)])
```
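As a sanity check, the agent can be loaded with `sb3_contrib.TQC` and evaluated with SB3's evaluation helper. This is a sketch: the in-repo filename is assumed, and Hopper-v3 needs a working MuJoCo installation.
```python
import gym
from huggingface_sb3 import load_from_hub
from sb3_contrib import TQC
from stable_baselines3.common.evaluation import evaluate_policy

# In-repo filename is an assumption based on the usual RL Zoo layout
checkpoint = load_from_hub("qgallouedec/tqc-Hopper-v3-1346000078", "tqc-Hopper-v3.zip")
model = TQC.load(checkpoint)

env = gym.make("Hopper-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```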
|
qgallouedec/tqc-FetchSlide-v1-1365846529 | qgallouedec | 2023-02-28T15:09:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchSlide-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:09:03Z | ---
library_name: stable-baselines3
tags:
- FetchSlide-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchSlide-v1
type: FetchSlide-v1
metrics:
- type: mean_reward
value: -22.70 +/- 6.15
name: mean_reward
verified: false
---
# **TQC** Agent playing **FetchSlide-v1**
This is a trained model of a **TQC** agent playing **FetchSlide-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchSlide-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchSlide-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env FetchSlide-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchSlide-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env FetchSlide-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchSlide-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 3000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, )'),
('tau', 0.05),
('normalize', False)])
```
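The OrderedDict above is the RL Zoo's serialized config. As a rough illustration only, an equivalent agent could be built directly with SB3-Contrib roughly as follows (argument names follow SB3 1.x; FetchSlide-v1 requires the MuJoCo-based robotics environments):
```python
import gym
from sb3_contrib import TQC
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3 import HerReplayBuffer

env = TimeFeatureWrapper(gym.make("FetchSlide-v1"))

model = TQC(
    "MultiInputPolicy",
    env,
    batch_size=2048,
    buffer_size=1_000_000,
    gamma=0.95,
    learning_rate=1e-3,
    tau=0.05,
    policy_kwargs=dict(net_arch=[512, 512, 512], n_critics=2),
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        online_sampling=True,
        goal_selection_strategy="future",
        n_sampled_goal=4,
    ),
    verbose=1,
)
model.learn(total_timesteps=3_000_000)
```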
|
Lilya/distilbert-base-uncased-ner-invoiceSenderRecipient_clean_inv_28_02 | Lilya | 2023-02-28T15:08:32Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-28T07:16:16Z | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-ner-invoiceSenderRecipient_clean_inv_28_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner-invoiceSenderRecipient_clean_inv_28_02
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0266
- eval_precision: 0.9595
- eval_recall: 0.9642
- eval_f1: 0.9618
- eval_accuracy: 0.9957
- eval_runtime: 60.7498
- eval_samples_per_second: 271.474
- eval_steps_per_second: 16.971
- epoch: 9.98
- step: 58000
## Model description
More information needed
## Intended uses & limitations
More information needed
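Pending a proper write-up, a minimal, hypothetical inference sketch with the `transformers` pipeline is shown below; the entity label names depend on the (undocumented) training data.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lilya/distilbert-base-uncased-ner-invoiceSenderRecipient_clean_inv_28_02",
    aggregation_strategy="simple",
)
# Example sentence is made up; real inputs would be OCR'd invoice text
print(ner("Invoice 4711 issued by Acme GmbH to John Doe, 12 Main Street."))
```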
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.13.1
- Datasets 2.3.2
- Tokenizers 0.10.3
|
qgallouedec/tqc-Humanoid-v3-1772834236 | qgallouedec | 2023-02-28T15:08:10Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:07:46Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 7623.27 +/- 70.86
name: mean_reward
verified: false
---
# **TQC** Agent playing **Humanoid-v3**
This is a trained model of a **TQC** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
qgallouedec/tqc-Ant-v3-372483154 | qgallouedec | 2023-02-28T15:04:09Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:03:47Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 3637.21 +/- 1959.88
name: mean_reward
verified: false
---
# **TQC** Agent playing **Ant-v3**
This is a trained model of a **TQC** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
qgallouedec/tqc-Humanoid-v3-1148850933 | qgallouedec | 2023-02-28T15:01:17Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T15:01:00Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 1274.08 +/- 302.39
name: mean_reward
verified: false
---
# **TQC** Agent playing **Humanoid-v3**
This is a trained model of a **TQC** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
qgallouedec/tqc-Ant-v3-2084207633 | qgallouedec | 2023-02-28T14:59:53Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:59:34Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 1988.37 +/- 1577.84
name: mean_reward
verified: false
---
# **TQC** Agent playing **Ant-v3**
This is a trained model of a **TQC** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
qgallouedec/tqc-Humanoid-v3-2077901749 | qgallouedec | 2023-02-28T14:58:59Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:58:35Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 7084.21 +/- 1923.61
name: mean_reward
verified: false
---
# **TQC** Agent playing **Humanoid-v3**
This is a trained model of a **TQC** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
qgallouedec/tqc-Hopper-v3-2496077244 | qgallouedec | 2023-02-28T14:54:58Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:54:36Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3644.20 +/- 5.10
name: mean_reward
verified: false
---
# **TQC** Agent playing **Hopper-v3**
This is a trained model of a **TQC** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('top_quantiles_to_drop_per_net', 5),
('normalize', False)])
```
|
qgallouedec/tqc-FetchPush-v1-702808983 | qgallouedec | 2023-02-28T14:52:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:52:25Z | ---
library_name: stable-baselines3
tags:
- FetchPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPush-v1
type: FetchPush-v1
metrics:
- type: mean_reward
value: -11.00 +/- 5.16
name: mean_reward
verified: false
---
# **TQC** Agent playing **FetchPush-v1**
This is a trained model of a **TQC** agent playing **FetchPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env FetchPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env FetchPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPush-v1 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, )'),
('tau', 0.05),
('normalize', False)])
```
|
qgallouedec/tqc-Ant-v3-1902130014 | qgallouedec | 2023-02-28T14:52:17Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:51:54Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 5330.35 +/- 1091.37
name: mean_reward
verified: false
---
# **TQC** Agent playing **Ant-v3**
This is a trained model of a **TQC** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
qgallouedec/tqc-parking-v0-768894194 | qgallouedec | 2023-02-28T14:51:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"parking-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:51:01Z | ---
library_name: stable-baselines3
tags:
- parking-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: parking-v0
type: parking-v0
metrics:
- type: mean_reward
value: -10.30 +/- 5.90
name: mean_reward
verified: false
---
# **TQC** Agent playing **parking-v0**
This is a trained model of a **TQC** agent playing **parking-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env parking-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env parking-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env parking-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env parking-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.0015),
('n_timesteps', 50000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='episode', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
|
qgallouedec/tqc-Hopper-v3-2631554861 | qgallouedec | 2023-02-28T14:50:53Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:50:34Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 2000.75 +/- 885.25
name: mean_reward
verified: false
---
# **TQC** Agent playing **Hopper-v3**
This is a trained model of a **TQC** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('top_quantiles_to_drop_per_net', 5),
('normalize', False)])
```
|
qgallouedec/tqc-Hopper-v3-1489988575 | qgallouedec | 2023-02-28T14:50:26Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:50:05Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3318.43 +/- 590.11
name: mean_reward
verified: false
---
# **TQC** Agent playing **Hopper-v3**
This is a trained model of a **TQC** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('top_quantiles_to_drop_per_net', 5),
('normalize', False)])
```
|
qgallouedec/tqc-Hopper-v3-4011682269 | qgallouedec | 2023-02-28T14:48:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:47:41Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3417.97 +/- 1.50
name: mean_reward
verified: false
---
# **TQC** Agent playing **Hopper-v3**
This is a trained model of a **TQC** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('top_quantiles_to_drop_per_net', 5),
('normalize', False)])
```
|
qgallouedec/tqc-parking-v0-4204328955 | qgallouedec | 2023-02-28T14:47:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"parking-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:46:48Z | ---
library_name: stable-baselines3
tags:
- parking-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: parking-v0
type: parking-v0
metrics:
- type: mean_reward
value: -9.33 +/- 4.81
name: mean_reward
verified: false
---
# **TQC** Agent playing **parking-v0**
This is a trained model of a **TQC** agent playing **parking-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env parking-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env parking-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env parking-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env parking-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.0015),
('n_timesteps', 50000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='episode', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
|
qgallouedec/tqc-parking-v0-1067225822 | qgallouedec | 2023-02-28T14:46:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"parking-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:45:55Z | ---
library_name: stable-baselines3
tags:
- parking-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: parking-v0
type: parking-v0
metrics:
- type: mean_reward
value: -12.54 +/- 12.02
name: mean_reward
verified: false
---
# **TQC** Agent playing **parking-v0**
This is a trained model of a **TQC** agent playing **parking-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env parking-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env parking-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env parking-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env parking-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.0015),
('n_timesteps', 50000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='episode', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
|
EcoCy/jultest | EcoCy | 2023-02-28T14:28:39Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-28T14:28:35Z | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: jultest01
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jultest
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "jultest01" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: jultest01




|
sarthakc44/Reinforce-Pixelcopter-PLE-v1 | sarthakc44 | 2023-02-28T14:27:00Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:26:57Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.90 +/- 22.93
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
giobin/SnowballTarget1 | giobin | 2023-02-28T14:24:04Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-28T14:23:59Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: giobin/SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
aherzberg/ser_model_fixed_label | aherzberg | 2023-02-28T14:19:53Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-02-28T11:20:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ser_model_fixed_label
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ser_model_fixed_label
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7010
- Accuracy: 0.8367
## Model description
More information needed
## Intended uses & limitations
More information needed
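Pending a proper write-up, a minimal, hypothetical inference sketch with the audio-classification pipeline; the file path is a placeholder and the emotion label set is not documented here.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="aherzberg/ser_model_fixed_label")
# Path is a placeholder; use a short speech clip (wav2vec2 models expect 16 kHz audio)
print(classifier("path/to/utterance.wav", top_k=3))
```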
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7645 | 0.96 | 18 | 1.5899 | 0.4333 |
| 1.5148 | 1.96 | 36 | 1.4152 | 0.4433 |
| 1.3042 | 2.96 | 54 | 1.1857 | 0.5767 |
| 1.1184 | 3.96 | 72 | 1.0508 | 0.62 |
| 0.9588 | 4.96 | 90 | 0.9329 | 0.7 |
| 0.9789 | 5.96 | 108 | 0.8638 | 0.74 |
| 0.7835 | 6.96 | 126 | 0.7730 | 0.8133 |
| 0.7259 | 7.96 | 144 | 0.7355 | 0.83 |
| 0.6783 | 8.96 | 162 | 0.7190 | 0.8333 |
| 0.6644 | 9.96 | 180 | 0.7010 | 0.8367 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ybelkada/gpt-j-6b-detoxified-20shdl | ybelkada | 2023-02-28T14:13:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-17T15:37:10Z | # Model card for detoxified gpt-j-6b
The model run can be found [here](https://wandb.ai/distill-bloom/trl/runs/kw15qua9?workspace=user-younesbelkada).
The main difference is that I used `mini_batch_size=1` |
schreon/gpt2large-lhm-06 | schreon | 2023-02-28T14:03:44Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:training_corpus",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-25T19:04:19Z | ---
tags:
- generated_from_trainer
datasets:
- training_corpus
model-index:
- name: gpt2large-lhm-06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2large-lhm-06
This model was trained from scratch on the training_corpus dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00018
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 200
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
akoshel/Reinforce-Cartpole-v1 | akoshel | 2023-02-28T14:01:09Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-17T07:43:47Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Isaacgv/q-FrozenLake-v1-4x4-noSlippery | Isaacgv | 2023-02-28T13:58:25Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T13:58:22Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the small helper defined in the Deep RL course notebooks (not a packaged import)
model = load_from_hub(repo_id="Isaacgv/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Qilex/a2c-PandaReachDense-v2 | Qilex | 2023-02-28T13:55:18Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-27T21:32:37Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.41 +/- 0.46
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
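Pending the author's own snippet, here is a hedged sketch of the usual pattern for loading an SB3 checkpoint from the Hub; the `.zip` filename and the `panda_gym` import follow common conventions for these uploads and may need adjusting to this repository's actual contents.

```python
import gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename based on the usual SB3 naming convention; check the repo files if it differs.
checkpoint = load_from_hub(repo_id="Qilex/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```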
|
UnstableCreatures/Test | UnstableCreatures | 2023-02-28T13:49:48Z | 0 | 0 | null | [
"text-to-image",
"en",
"region:us"
]
| text-to-image | 2023-02-28T13:05:59Z | ---
language:
- en
pipeline_tag: text-to-image
--- |
johnowhitaker/pyramid_noise_test_600steps_08discount | johnowhitaker | 2023-02-28T13:41:52Z | 4 | 9 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"multires_noise",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-28T13:03:48Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- multires_noise
inference: true
---
A model trained with Pyramid Noise - see https://wandb.ai/johnowhitaker/multires_noise/reports/Multi-Resolution-Noise-for-Diffusion-Model-Training--VmlldzozNjYyOTU2 for details
```python
import random

import torch
from torch import nn

def pyramid_noise_like(x, discount=0.8):
  b, c, w, h = x.shape
  u = nn.Upsample(size=(w, h), mode='bilinear')
  noise = torch.randn_like(x)
  for i in range(6):
    r = random.random()*2+2  # Rather than always going 2x, pick a random scale factor in [2, 4)
    w, h = max(1, int(w/(r**i))), max(1, int(h/(r**i)))  # resolution shrinks progressively each iteration
    noise += u(torch.randn(b, c, w, h).to(x)) * discount**i  # add lower-res noise, upsampled and discounted
    if w==1 or h==1: break  # stop once the lowest resolution is reached
  return noise / noise.std() # Scale back to roughly unit variance
```
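The accompanying report uses this function in place of plain Gaussian noise during fine-tuning. The fragment below is only a sketch of that substitution: `unet`, `noise_scheduler`, `latents` and `encoder_hidden_states` are assumed to come from a standard diffusers text-to-image training loop and are not defined here.

```python
import torch
import torch.nn.functional as F

# Sketch only -- assumes `latents`, `encoder_hidden_states`, `unet` and `noise_scheduler`
# from a standard diffusers fine-tuning script; the single change is the noise source.
noise = pyramid_noise_like(latents, discount=0.8)  # instead of torch.randn_like(latents)
timesteps = torch.randint(
    0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
).long()
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(noise_pred.float(), noise.float())
```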
To use the model for inference, just load it like a normal Stable Diffusion pipeline:
```python
import torch
from diffusers import StableDiffusionPipeline
model_path = "johnowhitaker/pyramid_noise_test_600steps_08discount"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe(prompt="A black image").images[0]
image
``` |
zambezivoice/xls-r-300m-zv-mul | zambezivoice | 2023-02-28T13:33:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-28T05:20:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-300m-zv-mul
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-zv-mul
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4882
- Wer: 0.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
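Usage is not documented yet. As a hedged sketch, an XLS-R CTC checkpoint like this one can typically be used for transcription through the `transformers` pipeline (the audio path below is a placeholder, and the supported languages depend on the undisclosed training data):

```python
from transformers import pipeline

# Hedged sketch: the pipeline decodes and resamples the file to the 16 kHz rate expected by XLS-R.
asr = pipeline("automatic-speech-recognition", model="zambezivoice/xls-r-300m-zv-mul")
print(asr("sample.wav")["text"])
```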
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7438 | 0.26 | 500 | 0.9380 | 0.9135 |
| 1.1094 | 0.52 | 1000 | 0.5399 | 0.6874 |
| 0.9203 | 0.79 | 1500 | 0.5056 | 0.6708 |
| 0.8439 | 1.05 | 2000 | 0.4501 | 0.5775 |
| 0.7871 | 1.31 | 2500 | 0.4231 | 0.5592 |
| 0.761 | 1.57 | 3000 | 0.4335 | 0.5469 |
| 0.7309 | 1.83 | 3500 | 0.4204 | 0.5407 |
| 0.706 | 2.1 | 4000 | 0.4009 | 0.5177 |
| 0.6816 | 2.36 | 4500 | 0.3866 | 0.5108 |
| 0.6639 | 2.62 | 5000 | 0.3786 | 0.4895 |
| 0.6532 | 2.88 | 5500 | 0.3791 | 0.4895 |
| 0.6347 | 3.14 | 6000 | 0.3681 | 0.4740 |
| 0.6062 | 3.4 | 6500 | 0.3513 | 0.4695 |
| 0.5976 | 3.67 | 7000 | 0.3654 | 0.4779 |
| 0.5885 | 3.93 | 7500 | 0.3441 | 0.4552 |
| 0.5791 | 4.19 | 8000 | 0.3821 | 0.4610 |
| 0.6671 | 4.45 | 8500 | 0.4708 | 0.4981 |
| 0.6961 | 4.71 | 9000 | 0.4882 | 0.4859 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_final_Cont | Yagorka | 2023-02-28T13:29:06Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2023-02-28T07:25:37Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-pokemons-128_300_epochs_1000_steps_final_Cont
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
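Pending the snippet above, here is a hedged sketch that assumes the checkpoint follows the standard `DDPMPipeline` layout produced by the diffusers unconditional-training example:

```python
import torch
from diffusers import DDPMPipeline

# Hedged sketch: sampling settings are library defaults, not tuned recommendations.
pipeline = DDPMPipeline.from_pretrained("Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_final_Cont")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipeline().images[0]  # one sample (presumably 128x128, per the model name)
image.save("pokemon_sample.png")
```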
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 11
- eval_batch_size: 12
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_final_Cont/tensorboard?#scalars)
|
csebuetnlp/mT5_m2m_crossSum | csebuetnlp | 2023-02-28T13:23:28Z | 40 | 8 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",
"si",
"so",
"es",
"sw",
"ta",
"te",
"th",
"ti",
"tr",
"uk",
"ur",
"uz",
"vi",
"cy",
"yo",
"dataset:csebuetnlp/CrossSum",
"arxiv:2112.08804",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-04-20T15:11:49Z | ---
tags:
- summarization
- mT5
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
licenses:
- cc-by-nc-sa-4.0
widget:
- text: >-
Videos that say approved vaccines are dangerous and cause autism, cancer or
infertility are among those that will be taken down, the company said. The
policy includes the termination of accounts of anti-vaccine influencers.
Tech giants have been criticised for not doing more to counter false health
information on their sites. In July, US President Joe Biden said social
media platforms were largely responsible for people's scepticism in getting
vaccinated by spreading misinformation, and appealed for them to address the
issue. YouTube, which is owned by Google, said 130,000 videos were removed
from its platform since last year, when it implemented a ban on content
spreading misinformation about Covid vaccines. In a blog post, the company
said it had seen false claims about Covid jabs "spill over into
misinformation about vaccines in general". The new policy covers
long-approved vaccines, such as those against measles or hepatitis B.
"We're expanding our medical misinformation policies on YouTube with new
guidelines on currently administered vaccines that are approved and
confirmed to be safe and effective by local health authorities and the WHO,"
the post said, referring to the World Health Organization.
datasets:
- csebuetnlp/CrossSum
---
# mT5-m2m-CrossSum
This repository contains the many-to-many (m2m) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset. This model tries to **summarize text written in any language in the provided target language.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2m_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
get_lang_id = lambda lang: tokenizer._convert_token_to_id(
model.config.task_specific_params["langid_map"][lang][1]
)
target_lang = "english" # for a list of available language names see below
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
decoder_start_token_id=get_lang_id(target_lang),
max_length=84,
no_repeat_ngram_size=2,
num_beams=4,
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
### Available target language names
- `amharic`
- `arabic`
- `azerbaijani`
- `bengali`
- `burmese`
- `chinese_simplified`
- `chinese_traditional`
- `english`
- `french`
- `gujarati`
- `hausa`
- `hindi`
- `igbo`
- `indonesian`
- `japanese`
- `kirundi`
- `korean`
- `kyrgyz`
- `marathi`
- `nepali`
- `oromo`
- `pashto`
- `persian`
- `pidgin`
- `portuguese`
- `punjabi`
- `russian`
- `scottish_gaelic`
- `serbian_cyrillic`
- `serbian_latin`
- `sinhala`
- `somali`
- `spanish`
- `swahili`
- `tamil`
- `telugu`
- `thai`
- `tigrinya`
- `turkish`
- `ukrainian`
- `urdu`
- `uzbek`
- `vietnamese`
- `welsh`
- `yoruba`
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
``` |
akoshel/q-CartPole-v1 | akoshel | 2023-02-28T13:22:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-28T13:22:33Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the small helper defined in the Deep RL course notebooks (not a packaged import)
model = load_from_hub(repo_id="akoshel/q-CartPole-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|