| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
shuojiang/PPO-LunarLander-v2-Tuned | shuojiang | 2022-10-11T19:00:34Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-10-11T19:00:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 272.92 +/- 19.82
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
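Until the snippet above is filled in, a minimal loading sketch might look like the following. This is an assumption based on the standard `huggingface_sb3` workflow, and the checkpoint filename inside the repo is hypothetical:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo; the filename is assumed and may differ.
checkpoint = load_from_hub(
    repo_id="shuojiang/PPO-LunarLander-v2-Tuned",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)
```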
|
microsoft/bloom-deepspeed-inference-fp16 | microsoft | 2022-10-11T18:28:26Z | 13 | 12 | transformers | [
"transformers",
"bloom",
"feature-extraction",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-08-17T21:01:05Z | ---
license: bigscience-bloom-rail-1.0
---
This is a copy of the original [BLOOM weights](https://huggingface.co/bigscience/bloom) that is more efficient to use with [DeepSpeed-MII](https://github.com/microsoft/deepspeed-mii) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). In this repo, the original tensors are split into 8 shards to target 8 GPUs, which allows the user to run the model with DeepSpeed-Inference tensor parallelism.
For specific details about the BLOOM model itself, please see the [original BLOOM model card](https://huggingface.co/bigscience/bloom).
For examples on using this repo please see the following:
* https://github.com/huggingface/transformers-bloom-inference
* https://github.com/microsoft/DeepSpeed-MII
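As a rough illustration of the tensor-parallel loading path (a sketch only, not this repo's documented flow; the linked examples above load the shards via a DeepSpeed checkpoint description rather than materializing the full model first):
```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tokenizer from the original BLOOM repo, which is known to ship tokenizer files.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")

# Naive load shown for clarity only; at BLOOM scale the linked examples use
# meta-device initialization plus a checkpoint JSON instead.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/bloom-deepspeed-inference-fp16", torch_dtype=torch.float16
)

# mp_size=8 matches the 8-way sharding of this repo
# (launch under `deepspeed --num_gpus 8`).
model = deepspeed.init_inference(
    model, mp_size=8, dtype=torch.float16, replace_with_kernel_inject=True
)
```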
|
sd-concepts-library/nard-style | sd-concepts-library | 2022-10-11T18:23:38Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-11T18:23:27Z | ---
license: mit
---
### Nard Style on Stable Diffusion
This is the `<nard>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
stevhliu/my_awesome_billsum_model | stevhliu | 2022-10-11T18:23:16Z | 699 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-11T18:04:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.176
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4290
- Rouge1: 0.176
- Rouge2: 0.0773
- Rougel: 0.1454
- Rougelsum: 0.1455
- Gen Len: 19.0
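As a quick usage sketch (not part of the original card), the checkpoint can be exercised with the standard `transformers` summarization pipeline; the sample text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model")
text = "..."  # placeholder: a bill text, e.g. from the billsum dataset
print(summarizer(text))
```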
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.5195 | 0.1478 | 0.0528 | 0.1197 | 0.1194 | 19.0 |
| No log | 2.0 | 124 | 2.4660 | 0.1572 | 0.06 | 0.1288 | 0.1287 | 19.0 |
| No log | 3.0 | 186 | 2.4366 | 0.1691 | 0.0719 | 0.1394 | 0.1396 | 19.0 |
| No log | 4.0 | 248 | 2.4290 | 0.176 | 0.0773 | 0.1454 | 0.1455 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
nvidia/nemo-megatron-t5-3B | nvidia | 2022-10-11T17:45:19Z | 25 | 9 | nemo | [
"nemo",
"pytorch",
"seq2seq",
"masked language modeling",
"en",
"dataset:the_pile",
"arxiv:1910.10683",
"arxiv:1909.08053",
"arxiv:2101.00027",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-09-20T20:57:01Z | ---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- pytorch
- seq2seq
- masked language modeling
license: cc-by-4.0
---
# NeMo Megatron-T5 3B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
NeMo Megatron-T5 3B is a transformer-based masked language model. [T5](https://arxiv.org/abs/1910.10683) [1] is a class of encoder-decoder models trained with a span-based masked language modeling objective. We follow the [T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1) approach of pre-training using only the masked language modeling objective. The model was trained with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1, and it should fit on a single NVIDIA GPU for inference and on 2 A100 80GB GPUs for fine-tuning.
This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
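For intuition, the span-based masking objective replaces contiguous input spans with sentinel tokens and trains the decoder to reconstruct them; the canonical example from the T5 paper looks like this:
```
Original: Thank you for inviting me to your party last week.
Input:    Thank you <extra_id_0> me to your party <extra_id_1> week.
Target:   <extra_id_0> for inviting <extra_id_1> last <extra_id_2>
```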
## Getting started
### Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```
```
pip install nemo_toolkit['nlp']==1.11.0
```
Alternatively, you can use the NeMo Megatron training Docker container, which has all dependencies pre-installed: [https://developer.nvidia.com/nemo-megatron-open-beta?nvid=nv-int-tblg-249896](https://developer.nvidia.com/nemo-megatron-open-beta)
### Step 2: Run inference
**Note.** The model has been trained with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1, but it should be possible to run inference with tensor parallel size 1 on most NVIDIA GPUs.
```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_t5_eval.py \
--model_file /raid/Data/NMT/Models/t5_3b/nemo_megatron_t5_3b_bf16_tp2.nemo \
--prompt '<mask> was the first person to set foot on the moon. When he did, he uttered the phrase <mask> for man, one <mask> for mankind which is still a popular quote today.' \
--tensor_model_parallel_size 2
```
The script will automatically replace all \<mask\> tokens with the appropriate sentinel tokens used while pre-training and attempt to fill them in autoregressively with greedy decoding.
*Expected Response*:
```
{
'prompt': '<mask> was the first person to set foot on the moon. When he did, he uttered the phrase <mask> for man, one <mask> for mankind which is still a popular quote today.',
'completion':
{
'text': '[CLS] <extra_id_0> Neil Armstrong <extra_id_1> one small step <extra_id_2> giant leap',
'tokens': [(101, '[CLS]', -2.9802276912960224e-06), (28996, '<extra_id_0>', -0.1492447555065155), (6003, 'Neil', -0.0015669699059799314), (8800, 'Armstrong', -0.013404252007603645), (28997, '<extra_id_1>', -0.9019092917442322), (1141, 'one', -0.7962003350257874), (1353, 'small', -0.006306509021669626), (2585, 'step', -1.9073468138230965e-06), (28998, '<extra_id_2>', -0.0026884861290454865), (4994, 'giant', -0.1679367572069168), (13660, 'leap', -5.960462772236497e-07)]
},
'masked_input': '<extra_id_0> was the first person to set foot on the moon . When he did , he uttered the phrase <extra_id_1> for man , one <extra_id_2> for mankind which is still a popular quote today .'
}
```
- prompt: The raw prompt provided as input.
- completion:
  - text: The final generated text from the model, including special/sentinel tokens (other than \</s\>).
  - tokens: Each generated subword, along with its log-probability.
- masked_input: The original raw prompt with \<mask\> tokens replaced by the appropriate sentinel tokens.
## Training Data
The model was trained on ["The Pile" dataset prepared by EleutherAI](https://pile.eleuther.ai/). [4]
## Evaluation results
*Fine-tuned Performance* on downstream *validation* sets for different tasks
| MNLI-M | MNLI-MM | SST-2 | STS-B (Spearman) |
| -------| --------| ------| -----------------|
| 90.62 | 90.61 | 97.2 | 91.5 |
## Limitations
The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases, so the model may amplify those biases and return toxic responses, especially when given toxic prompts.
## References
[1] [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683)
[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
## License
Use of this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the public release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
BigSalmon/FormalInformalConcise-FIM-NeoX-1.3B | BigSalmon | 2022-10-11T17:30:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-11T15:58:04Z | data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
Trained on top of this model: https://huggingface.co/CarperAI/FIM-NeoX-1.3B, which is geared toward filling in the blank. Check out their model and give them a like!
```
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
tokenizer = GPTNeoXTokenizerFast.from_pretrained("CarperAI/FIM-NeoX-1.3B")
model = GPTNeoXForCausalLM.from_pretrained("BigSalmon/FormalInformalConcise-FIM-NeoX-1.3B")
```
To load the model, you may need to:
```
pip install git+https://github.com/huggingface/transformers
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/GPT2Mask
```
```
prompt = """<|SUF|> into relaxation <|PRE|> music before bedtime <|MID|>"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
How To Make Prompts:
Infill Phrase Masking In-Fill
```
<|SUF|> into relaxation <|PRE|> music before bedtime <|MID|>
```
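A small helper (hypothetical, purely for illustration) makes the token ordering explicit: the model is shown the suffix and the prefix, then generates the middle:
```python
def make_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt in this model's format."""
    return f"<|SUF|> {suffix} <|PRE|> {prefix} <|MID|>"

# Reproduces the example above: the model fills in what comes between
# "music before bedtime" and "into relaxation".
prompt = make_fim_prompt("music before bedtime", "into relaxation")
```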
Informal To Formal
```
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
``` |
BigSalmon/InformalToFormalLincoln83Paraphrase | BigSalmon | 2022-10-11T17:28:43Z | 201 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-06T22:04:11Z | data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
CHECK OUT THIS MODEL: BigSalmon/FormalInformalConcise-FIM-NeoX-1.3B
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln83Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln83Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
*Note* Of all the masking techniques, this one works the best.
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
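Again purely illustrative (a hypothetical helper, not part of the original card), the prefix/suffix/middle layout can be assembled like this:
```python
def make_psm_prompt(prefix: str, suffix: str) -> str:
    """Assemble a <Prefix>/<Suffix>/<Middle> prompt; the model completes the middle."""
    return f"<Prefix> {prefix} <Prefix> <Suffix> {suffix} <Suffix> <Middle>"

# Matches the first example above; the expected middle is "their robust season to".
prompt = make_psm_prompt("the atlanta hawks may attribute", "trae young")
```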
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
``` |
stevhliu/my_awesome_food_model | stevhliu | 2022-10-11T17:18:25Z | 219 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-10-11T16:54:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1671
- Accuracy: 0.916
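For a quick smoke test (not from the original card), the standard image-classification pipeline can be used; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="stevhliu/my_awesome_food_model")
print(classifier("food.jpg"))  # placeholder path to a food photo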
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7213 | 0.99 | 62 | 1.6647 | 0.885 |
| 1.2902 | 1.99 | 124 | 1.2744 | 0.918 |
| 1.1288 | 2.99 | 186 | 1.1671 | 0.916 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
Z21/distilbert-base-uncased-finetuned-cola | Z21 | 2022-10-11T17:13:24Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-11T15:22:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5512772054945002
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8076
- Matthews Correlation: 0.5513
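As a quick usage sketch (not part of the original card), the fine-tuned checkpoint can be queried for acceptability judgments via the text-classification pipeline; note the CoLA labels may surface as generic `LABEL_0`/`LABEL_1`:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Z21/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by the author."))
```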
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5264 | 1.0 | 535 | 0.5380 | 0.4135 |
| 0.3486 | 2.0 | 1070 | 0.5007 | 0.4923 |
| 0.2404 | 3.0 | 1605 | 0.5373 | 0.5358 |
| 0.1757 | 4.0 | 2140 | 0.7435 | 0.5414 |
| 0.122 | 5.0 | 2675 | 0.8076 | 0.5513 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
sd-concepts-library/natasha-johnston | sd-concepts-library | 2022-10-11T16:00:26Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-10-11T16:00:15Z | ---
license: mit
---
### Natasha Johnston on Stable Diffusion
This is the `<natasha-johnston>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
araffin/tqc-donkey-minimonaco-track-v0 | araffin | 2022-10-11T15:28:09Z | 16 | 0 | stable-baselines3 | [
"stable-baselines3",
"donkey-minimonaco-track-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-05T18:26:20Z | ---
library_name: stable-baselines3
tags:
- donkey-minimonaco-track-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 386.49 +/- 0.77
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: donkey-minimonaco-track-v0
type: donkey-minimonaco-track-v0
---
# **TQC** Agent playing **donkey-minimonaco-track-v0**
This is a trained model of a **TQC** agent playing **donkey-minimonaco-track-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/>
Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/>
RL Zoo branch: `feat/gym-donkeycar`
**Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/ae-32_monaco.pkl
```
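# Export path to autoencoder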
export AE_PATH=/path/to/ae-32_monaco.pkl
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env donkey-minimonaco-track-v0 -orga araffin -f logs/
python enjoy.py --algo tqc --env donkey-minimonaco-track-v0 -f logs/
```
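Outside the RL Zoo scripts, the checkpoint can in principle be loaded directly with SB3-contrib. This is an assumption based on the standard `huggingface_sb3` workflow; the filename is hypothetical, and actually driving the simulator still requires the Donkey Car env and the autoencoder wrappers above:
```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import TQC

checkpoint = load_from_hub(
    repo_id="araffin/tqc-donkey-minimonaco-track-v0",
    filename="tqc-donkey-minimonaco-track-v0.zip",  # hypothetical filename
)
model = TQC.load(checkpoint)
```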
## Training (with the RL Zoo)
```
python train.py --algo tqc --env donkey-minimonaco-track-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env donkey-minimonaco-track-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 200000),
('callback',
[{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}},
'rl_zoo3.callbacks.LapTimeCallback']),
('ent_coef', 'auto'),
('env_wrapper',
[{'gym.wrappers.time_limit.TimeLimit': {'max_episode_steps': 10000}},
'ae.wrapper.AutoencoderWrapper',
{'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]),
('gamma', 0.99),
('gradient_steps', 256),
('learning_rate', 0.00073),
('learning_starts', 500),
('n_timesteps', 2000000.0),
('normalize', "{'norm_obs': True, 'norm_reward': False}"),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, '
'use_expln=True)'),
('sde_sample_freq', 16),
('tau', 0.02),
('train_freq', 200),
('use_sde', True),
('use_sde_at_warmup', True),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
# Environment Arguments
```python
{'conf': {'cam_resolution': (120, 160, 3),
'car_config': {'body_rgb': (226, 112, 18),
'body_style': 'donkey',
'car_name': 'Toni',
'font_size': 40},
'frame_skip': 1,
'host': 'localhost',
'level': 'mini_monaco',
'log_level': 20,
'max_cte': 8,
'port': 9091,
'start_delay': 5.0},
'min_throttle': -0.2,
'steer': 0.8}
```
|
araffin/tqc-donkey-avc-sparkfun-v0 | araffin | 2022-10-11T15:28:04Z | 18 | 1 | stable-baselines3 | [
"stable-baselines3",
"donkey-avc-sparkfun-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-03T20:43:54Z | ---
library_name: stable-baselines3
tags:
- donkey-avc-sparkfun-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 552.57 +/- 285.35
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: donkey-avc-sparkfun-v0
type: donkey-avc-sparkfun-v0
---
# **TQC** Agent playing **donkey-avc-sparkfun-v0**
This is a trained model of a **TQC** agent playing **donkey-avc-sparkfun-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/>
Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/>
RL Zoo branch: `feat/gym-donkeycar`
**Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/ae-32_avc.pkl
```
# Export path to autoencoder
export AE_PATH=/path/to/ae-32_avc.pkl
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env donkey-avc-sparkfun-v0 -orga araffin -f logs/
python enjoy.py --algo tqc --env donkey-avc-sparkfun-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env donkey-avc-sparkfun-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env donkey-avc-sparkfun-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 200000),
('callback',
[{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}},
'rl_zoo3.callbacks.LapTimeCallback']),
('ent_coef', 'auto'),
('env_wrapper',
['ae.wrapper.AutoencoderWrapper',
{'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]),
('gamma', 0.99),
('gradient_steps', 256),
('learning_rate', 0.00073),
('learning_starts', 500),
('n_timesteps', 2000000.0),
('normalize', "{'norm_obs': True, 'norm_reward': False}"),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, '
'use_expln=True)'),
('sde_sample_freq', 16),
('tau', 0.02),
('train_freq', 200),
('use_sde', True),
('use_sde_at_warmup', True),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
# Environment Arguments
```python
{'conf': {'cam_resolution': (120, 160, 3),
'car_config': {'body_rgb': (226, 112, 18),
'body_style': 'donkey',
'car_name': 'Toni',
'font_size': 40},
'frame_skip': 1,
'host': 'localhost',
'level': 'sparkfun_avc',
'log_level': 20,
'max_cte': 16,
'port': 9091,
'start_delay': 5.0},
'min_throttle': -0.2,
'steer': 0.3}
```
|
araffin/tqc-donkey-mountain-track-v0 | araffin | 2022-10-11T15:27:57Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"donkey-mountain-track-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-04T20:07:42Z | ---
library_name: stable-baselines3
tags:
- donkey-mountain-track-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 363.88 +/- 0.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: donkey-mountain-track-v0
type: donkey-mountain-track-v0
---
# **TQC** Agent playing **donkey-mountain-track-v0**
This is a trained model of a **TQC** agent playing **donkey-mountain-track-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/>
Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/>
RL Zoo branch: `feat/gym-donkeycar`
**Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/ae-32_mountain.pkl
```
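# Export path to autoencoder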
export AE_PATH=/path/to/ae-32_mountain.pkl
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env donkey-mountain-track-v0 -orga araffin -f logs/
python enjoy.py --algo tqc --env donkey-mountain-track-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env donkey-mountain-track-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env donkey-mountain-track-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 200000),
('callback',
[{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}},
'rl_zoo3.callbacks.LapTimeCallback']),
('ent_coef', 'auto'),
('env_wrapper',
[{'gym.wrappers.time_limit.TimeLimit': {'max_episode_steps': 10000}},
'ae.wrapper.AutoencoderWrapper',
{'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]),
('gamma', 0.99),
('gradient_steps', 256),
('learning_rate', 0.00073),
('learning_starts', 500),
('n_timesteps', 2000000.0),
('normalize', "{'norm_obs': True, 'norm_reward': False}"),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, '
'use_expln=True)'),
('sde_sample_freq', 16),
('tau', 0.02),
('train_freq', 200),
('use_sde', True),
('use_sde_at_warmup', True),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
# Environment Arguments
```python
{'conf': {'cam_resolution': (120, 160, 3),
'car_config': {'body_rgb': (226, 112, 18),
'body_style': 'donkey',
'car_name': 'Toni',
'font_size': 40},
'frame_skip': 1,
'host': 'localhost',
'level': 'mountain_track',
'log_level': 20,
'max_cte': 16,
'port': 9091,
'start_delay': 5.0},
'min_throttle': -0.2,
'steer': 0.3}
```
|
araffin/dqn-LunarLander-v2 | araffin | 2022-10-11T15:23:48Z | 1 | 2 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-05T21:30:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 280.22 +/- 13.03
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-Baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
# Download checkpoint
checkpoint = load_from_hub("araffin/dqn-LunarLander-v2", "dqn-LunarLander-v2.zip")
# Override target_update_interval to avoid a warning when loading
kwargs = dict(target_update_interval=30)
# Load the model
model = DQN.load(checkpoint, **kwargs)
env = make_vec_env("LunarLander-v2", n_envs=1)
# Evaluate
print("Evaluating model")
mean_reward, std_reward = evaluate_policy(
model,
env,
n_eval_episodes=20,
deterministic=True,
)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
# Start a new episode
obs = env.reset()
try:
while True:
action, _states = model.predict(obs, deterministic=True)
obs, rewards, dones, info = env.step(action)
env.render()
except KeyboardInterrupt:
pass
```
## Training Code (with Stable-Baselines3)
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.callbacks import EvalCallback
# Create the environment
env_id = "LunarLander-v2"
n_envs = 8
env = make_vec_env(env_id, n_envs=n_envs)
# Create the evaluation envs
eval_envs = make_vec_env(env_id, n_envs=5)
# Adjust evaluation interval depending on the number of envs
eval_freq = int(1e5)
eval_freq = max(eval_freq // n_envs, 1)
# Create evaluation callback to save best model
# and monitor agent performance
eval_callback = EvalCallback(
eval_envs,
best_model_save_path="./logs/",
eval_freq=eval_freq,
n_eval_episodes=10,
)
# Instantiate the agent
# Hyperparameters from https://github.com/DLR-RM/rl-baselines3-zoo
model = DQN(
"MlpPolicy",
env,
learning_starts=0,
batch_size=128,
buffer_size=100000,
learning_rate=7e-4,
target_update_interval=250,
train_freq=1,
gradient_steps=4,
# Explore for 40_000 timesteps
exploration_fraction=0.08,
exploration_final_eps=0.05,
policy_kwargs=dict(net_arch=[256, 256]),
verbose=1,
)
# Train the agent (you can stop it early with ctrl+c)
try:
model.learn(total_timesteps=int(5e5), callback=eval_callback)
except KeyboardInterrupt:
pass
# Load best model
model = DQN.load("logs/best_model.zip")
```
|
sb3/tqc-FetchPush-v1 | sb3 | 2022-10-11T15:19:44Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:55:29Z | ---
library_name: stable-baselines3
tags:
- FetchPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -11.60 +/- 6.20
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPush-v1
type: FetchPush-v1
---
# **TQC** Agent playing **FetchPush-v1**
This is a trained model of a **TQC** agent playing **FetchPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga sb3 -f logs/
python enjoy.py --algo tqc --env FetchPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env FetchPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPush-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
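For reference, here is a sketch of how these zoo hyperparameters map onto an actual constructor call (assuming SB3 1.x, where `HerReplayBuffer` still accepts `online_sampling`/`max_episode_length`, and a working MuJoCo install for the Fetch environments):
```python
import gym
from sb3_contrib import TQC
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3 import HerReplayBuffer

# Same env wrapper as in the config above; FetchPush-v1 needs mujoco-py
env = TimeFeatureWrapper(gym.make("FetchPush-v1"))

model = TQC(
    "MultiInputPolicy",
    env,
    batch_size=512,
    buffer_size=1_000_000,
    gamma=0.98,
    learning_rate=1e-3,
    tau=0.005,
    policy_kwargs=dict(net_arch=[512, 512, 512], n_critics=2),
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        online_sampling=True,
        goal_selection_strategy="future",
        n_sampled_goal=4,
        max_episode_length=100,
    ),
)
# model.learn(total_timesteps=1_000_000)
```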
|
sb3/ddpg-Walker2DBulletEnv-v0 | sb3 | 2022-10-11T15:19:35Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:43:12Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: 1495.73 +/- 612.27
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **DDPG** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **DDPG** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Walker2DBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Walker2DBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.0007),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
|
sb3/ddpg-HopperBulletEnv-v0 | sb3 | 2022-10-11T15:19:15Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"HopperBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:20:17Z | ---
library_name: stable-baselines3
tags:
- HopperBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: 852.06 +/- 505.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HopperBulletEnv-v0
type: HopperBulletEnv-v0
---
# **DDPG** Agent playing **HopperBulletEnv-v0**
This is a trained model of a **DDPG** agent playing **HopperBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env HopperBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env HopperBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env HopperBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env HopperBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 0.0007),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', 64),
('normalize', False)])
```
|
sb3/ddpg-AntBulletEnv-v0 | sb3 | 2022-10-11T15:19:05Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:18:11Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: 2426.59 +/- 58.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **DDPG** Agent playing **AntBulletEnv-v0**
This is a trained model of a **DDPG** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env AntBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env AntBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env AntBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env AntBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.0007),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
|
sb3/tqc-Ant-v3 | sb3 | 2022-10-11T15:19:01Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:17:17Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 2656.30 +/- 1954.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
---
# **TQC** Agent playing **Ant-v3**
This is a trained model of a **TQC** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Ant-v3 -orga sb3 -f logs/
python enjoy.py --algo tqc --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Ant-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('use_sde', False),
('normalize', False)])
```
|
sb3/tqc-Walker2DBulletEnv-v0 | sb3 | 2022-10-11T15:18:51Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T19:57:09Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 2668.35 +/- 15.34
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **TQC** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **TQC** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Walker2DBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo tqc --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Walker2DBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 'lin_7.3e-4'),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/tqc-BipedalWalkerHardcore-v3 | sb3 | 2022-10-11T15:18:46Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T19:39:41Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 208.05 +/- 121.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
---
# **TQC** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **TQC** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env BipedalWalkerHardcore-v3 -orga sb3 -f logs/
python enjoy.py --algo tqc --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env BipedalWalkerHardcore-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('ent_coef', 'auto'),
('gamma', 0.99),
('gradient_steps', 1),
('learning_rate', 'lin_7.3e-4'),
('learning_starts', 10000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('tau', 0.01),
('train_freq', 1),
('normalize', False)])
```
|
sb3/tqc-BipedalWalker-v3 | sb3 | 2022-10-11T15:18:41Z | 7 | 1 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T19:20:35Z | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 334.73 +/- 0.28
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **TQC** Agent playing **BipedalWalker-v3**
This is a trained model of a **TQC** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env BipedalWalker-v3 -orga sb3 -f logs/
python enjoy.py --algo tqc --env BipedalWalker-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env BipedalWalker-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env BipedalWalker-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 0.00073),
('learning_starts', 10000),
('n_timesteps', 500000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/tqc-HalfCheetahBulletEnv-v0 | sb3 | 2022-10-11T15:18:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetahBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:58:07Z | ---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 3665.83 +/- 28.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
---
# **TQC** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of a **TQC** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env HalfCheetahBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo tqc --env HalfCheetahBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env HalfCheetahBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env HalfCheetahBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 0.00073),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/tqc-HopperBulletEnv-v0 | sb3 | 2022-10-11T15:18:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"HopperBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:57:00Z | ---
library_name: stable-baselines3
tags:
- HopperBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 2681.19 +/- 26.82
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HopperBulletEnv-v0
type: HopperBulletEnv-v0
---
# **TQC** Agent playing **HopperBulletEnv-v0**
This is a trained model of a **TQC** agent playing **HopperBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env HopperBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo tqc --env HopperBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env HopperBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env HopperBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 'lin_7.3e-4'),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('top_quantiles_to_drop_per_net', 5),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/tqc-Swimmer-v3 | sb3 | 2022-10-11T15:18:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Swimmer-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:53:58Z | ---
library_name: stable-baselines3
tags:
- Swimmer-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 339.95 +/- 0.80
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v3
type: Swimmer-v3
---
# **TQC** Agent playing **Swimmer-v3**
This is a trained model of a **TQC** agent playing **Swimmer-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Swimmer-v3 -orga sb3 -f logs/
python enjoy.py --algo tqc --env Swimmer-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env Swimmer-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Swimmer-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('gamma', 0.9999),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('use_sde', False),
('normalize', False)])
```
|
sb3/dqn-QbertNoFrameskip-v4 | sb3 | 2022-10-11T15:17:57Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:27:25Z | ---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 5300.00 +/- 6528.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **DQN** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env QbertNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo dqn --env QbertNoFrameskip-v4 -f logs/
```
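For use outside the zoo scripts, the sketch below recreates the training-time Atari preprocessing (`AtariWrapper` via `make_atari_env`, plus 4-frame stacking) and loads the checkpoint; the filename is assumed to follow the zoo convention, and `buffer_size=1` is passed only to avoid allocating the full replay buffer at inference time:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the preprocessing used during training: AtariWrapper + 4 stacked frames
env = make_atari_env("QbertNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Download and load the checkpoint (filename assumed)
checkpoint = load_from_hub("sb3/dqn-QbertNoFrameskip-v4", "dqn-QbertNoFrameskip-v4.zip")
model = DQN.load(checkpoint, buffer_size=1)  # shrink the replay buffer for inference

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```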
## Training (with the RL Zoo)
```
python train.py --algo dqn --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env QbertNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sb3/dqn-AsteroidsNoFrameskip-v4 | sb3 | 2022-10-11T15:17:52Z | 914 | 0 | stable-baselines3 | [
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:26:18Z | ---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 853.00 +/- 286.64
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
---
# **DQN** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env AsteroidsNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sb3/dqn-SeaquestNoFrameskip-v4 | sb3 | 2022-10-11T15:17:47Z | 450 | 0 | stable-baselines3 | [
"stable-baselines3",
"SeaquestNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:24:51Z | ---
library_name: stable-baselines3
tags:
- SeaquestNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 2286.00 +/- 815.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SeaquestNoFrameskip-v4
type: SeaquestNoFrameskip-v4
---
# **DQN** Agent playing **SeaquestNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SeaquestNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SeaquestNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo dqn --env SeaquestNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SeaquestNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SeaquestNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sb3/dqn-EnduroNoFrameskip-v4 | sb3 | 2022-10-11T15:17:38Z | 205 | 0 | stable-baselines3 | [
"stable-baselines3",
"EnduroNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:17:01Z | ---
library_name: stable-baselines3
tags:
- EnduroNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 889.40 +/- 213.13
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: EnduroNoFrameskip-v4
type: EnduroNoFrameskip-v4
---
# **DQN** Agent playing **EnduroNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **EnduroNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env EnduroNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo dqn --env EnduroNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env EnduroNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env EnduroNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sb3/dqn-Acrobot-v1 | sb3 | 2022-10-11T15:17:33Z | 2,654 | 0 | stable-baselines3 | [
"stable-baselines3",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T18:06:27Z | ---
library_name: stable-baselines3
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -72.10 +/- 6.44
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
---
# **DQN** Agent playing **Acrobot-v1**
This is a trained model of a **DQN** agent playing **Acrobot-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env Acrobot-v1 -orga sb3 -f logs/
python enjoy.py --algo dqn --env Acrobot-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env Acrobot-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env Acrobot-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 50000),
('exploration_final_eps', 0.1),
('exploration_fraction', 0.12),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 0.00063),
('learning_starts', 0),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256])'),
('target_update_interval', 250),
('train_freq', 4),
('normalize', False)])
```
|
sb3/ars-CartPole-v1 | sb3 | 2022-10-11T15:17:23Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:30:01Z | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **ARS** Agent playing **CartPole-v1**
This is a trained model of an **ARS** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env CartPole-v1 -orga sb3 -f logs/
python enjoy.py --algo ars --env CartPole-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ars --env CartPole-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env CartPole-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('n_delta', 2),
('n_envs', 1),
('n_timesteps', 50000.0),
('policy', 'LinearPolicy'),
('normalize', False)])
```
|
sb3/ars-MountainCarContinuous-v0 | sb3 | 2022-10-11T15:17:18Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCarContinuous-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:26:16Z | ---
library_name: stable-baselines3
tags:
- MountainCarContinuous-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- metrics:
- type: mean_reward
value: 96.50 +/- 0.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCarContinuous-v0
type: MountainCarContinuous-v0
---
# **ARS** Agent playing **MountainCarContinuous-v0**
This is a trained model of an **ARS** agent playing **MountainCarContinuous-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env MountainCarContinuous-v0 -orga sb3 -f logs/
python enjoy.py --algo ars --env MountainCarContinuous-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ars --env MountainCarContinuous-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env MountainCarContinuous-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('delta_std', 0.2),
('learning_rate', 0.018),
('n_delta', 4),
('n_envs', 8),
('n_timesteps', 500000.0),
('n_top', 1),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[16])'),
('zero_policy', False),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ars-Pendulum-v1 | sb3 | 2022-10-11T15:17:08Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"Pendulum-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:24:44Z | ---
library_name: stable-baselines3
tags:
- Pendulum-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- metrics:
- type: mean_reward
value: -282.08 +/- 194.51
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pendulum-v1
type: Pendulum-v1
---
# **ARS** Agent playing **Pendulum-v1**
This is a trained model of an **ARS** agent playing **Pendulum-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env Pendulum-v1 -orga sb3 -f logs/
python enjoy.py --algo ars --env Pendulum-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ars --env Pendulum-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env Pendulum-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('delta_std', 0.1),
('learning_rate', 0.018),
('n_delta', 4),
('n_envs', 8),
('n_timesteps', 2000000.0),
('n_top', 1),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[16])'),
('zero_policy', False),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ars-HalfCheetah-v3 | sb3 | 2022-10-11T15:16:54Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetah-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:22:11Z | ---
library_name: stable-baselines3
tags:
- HalfCheetah-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- metrics:
- type: mean_reward
value: 4046.14 +/- 2253.39
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v3
type: HalfCheetah-v3
---
# **ARS** Agent playing **HalfCheetah-v3**
This is a trained model of an **ARS** agent playing **HalfCheetah-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env HalfCheetah-v3 -orga sb3 -f logs/
python enjoy.py --algo ars --env HalfCheetah-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ars --env HalfCheetah-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env HalfCheetah-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('alive_bonus_offset', 0),
('delta_std', 0.03),
('learning_rate', 0.02),
('n_delta', 32),
('n_envs', 16),
('n_timesteps', 12500000.0),
('n_top', 4),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'LinearPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ars-Acrobot-v1 | sb3 | 2022-10-11T15:16:49Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:21:26Z | ---
library_name: stable-baselines3
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- metrics:
- type: mean_reward
value: -81.60 +/- 11.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
---
# **ARS** Agent playing **Acrobot-v1**
This is a trained model of an **ARS** agent playing **Acrobot-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env Acrobot-v1 -orga sb3 -f logs/
python enjoy.py --algo ars --env Acrobot-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ars --env Acrobot-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env Acrobot-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('delta_std', 0.1),
('learning_rate', 0.018),
('n_delta', 4),
('n_envs', 8),
('n_timesteps', 500000.0),
('n_top', 1),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[16])'),
('zero_policy', False),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/sac-MountainCarContinuous-v0 | sb3 | 2022-10-11T15:16:34Z | 1,219 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCarContinuous-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:07:55Z | ---
library_name: stable-baselines3
tags:
- MountainCarContinuous-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 94.53 +/- 1.26
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCarContinuous-v0
type: MountainCarContinuous-v0
---
# **SAC** Agent playing **MountainCarContinuous-v0**
This is a trained model of a **SAC** agent playing **MountainCarContinuous-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env MountainCarContinuous-v0 -orga sb3 -f logs/
python enjoy.py --algo sac --env MountainCarContinuous-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env MountainCarContinuous-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env MountainCarContinuous-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 50000),
('ent_coef', 0.1),
('gamma', 0.9999),
('gradient_steps', 32),
('learning_rate', 0.0003),
('learning_starts', 0),
('n_timesteps', 50000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3.67, net_arch=[64, 64])'),
('tau', 0.01),
('train_freq', 32),
('use_sde', True),
('normalize', False)])
```
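Outside the zoo, the same settings can be passed straight to the `SAC` constructor. A minimal sketch (assuming `stable_baselines3` and `gym` are installed; the zoo additionally handles seeding, evaluation, and checkpointing):
```python
import gym

from stable_baselines3 import SAC

env = gym.make("MountainCarContinuous-v0")

model = SAC(
    "MlpPolicy",
    env,
    batch_size=512,
    buffer_size=50_000,
    ent_coef=0.1,  # fixed entropy coefficient instead of the default 'auto'
    gamma=0.9999,
    gradient_steps=32,
    learning_rate=3e-4,
    learning_starts=0,
    tau=0.01,
    train_freq=32,
    use_sde=True,  # generalized State-Dependent Exploration
    policy_kwargs=dict(log_std_init=-3.67, net_arch=[64, 64]),
    verbose=1,
)
model.learn(total_timesteps=50_000)
```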
|
sb3/sac-BipedalWalker-v3 | sb3 | 2022-10-11T15:16:20Z | 2,187 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:04:41Z | ---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 300.53 +/- 0.76
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **SAC** Agent playing **BipedalWalker-v3**
This is a trained model of a **SAC** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env BipedalWalker-v3 -orga sb3 -f logs/
python enjoy.py --algo sac --env BipedalWalker-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env BipedalWalker-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env BipedalWalker-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 0.00073),
('learning_starts', 10000),
('n_timesteps', 500000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/sac-LunarLanderContinuous-v2 | sb3 | 2022-10-11T15:16:15Z | 1,706 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLanderContinuous-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:54:00Z | ---
library_name: stable-baselines3
tags:
- LunarLanderContinuous-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 251.89 +/- 71.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
---
# **SAC** Agent playing **LunarLanderContinuous-v2**
This is a trained model of a **SAC** agent playing **LunarLanderContinuous-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env LunarLanderContinuous-v2 -orga sb3 -f logs/
python enjoy.py --algo sac --env LunarLanderContinuous-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env LunarLanderContinuous-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env LunarLanderContinuous-v2 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('ent_coef', 'auto'),
('gamma', 0.99),
('gradient_steps', 1),
('learning_rate', 'lin_7.3e-4'),
('learning_starts', 10000),
('n_timesteps', 500000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('tau', 0.01),
('train_freq', 1),
('normalize', False)])
```
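The `'lin_7.3e-4'` entry is RL Zoo shorthand for a learning rate that decays linearly from 7.3e-4 to 0 over training. SB3 accepts any callable of the remaining training progress, so the schedule can be reproduced with a small helper; this is a sketch of the zoo's convention, not its exact utility:
```python
from typing import Callable

def linear_schedule(initial_value: float) -> Callable[[float], float]:
    """Linear decay from `initial_value` down to 0.

    SB3 calls the returned function with `progress_remaining`,
    which goes from 1 (start of training) to 0 (end of training).
    """
    def schedule(progress_remaining: float) -> float:
        return progress_remaining * initial_value
    return schedule

# Usage: SAC("MlpPolicy", env, learning_rate=linear_schedule(7.3e-4), ...)
```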
|
sb3/sac-HopperBulletEnv-v0 | sb3 | 2022-10-11T15:16:05Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"HopperBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:51:34Z | ---
library_name: stable-baselines3
tags:
- HopperBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 2592.61 +/- 97.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HopperBulletEnv-v0
type: HopperBulletEnv-v0
---
# **SAC** Agent playing **HopperBulletEnv-v0**
This is a trained model of a **SAC** agent playing **HopperBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env HopperBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo sac --env HopperBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env HopperBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env HopperBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 'lin_7.3e-4'),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
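The `env_wrapper` entry refers to `TimeFeatureWrapper` from SB3 Contrib, which appends the remaining episode time to the observation so that the time-limited task stays (closer to) Markovian. A minimal sketch of applying it by hand, assuming `pybullet` and `sb3_contrib` are installed:
```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers HopperBulletEnv-v0

from sb3_contrib.common.wrappers import TimeFeatureWrapper

env = gym.make("HopperBulletEnv-v0")
env = TimeFeatureWrapper(env)  # adds a normalized remaining-time feature

obs = env.reset()
print(obs.shape)  # one dimension larger than the raw observation
```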
|
sb3/sac-ReacherBulletEnv-v0 | sb3 | 2022-10-11T15:16:00Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"ReacherBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:50:33Z | ---
library_name: stable-baselines3
tags:
- ReacherBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 20.22 +/- 10.96
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ReacherBulletEnv-v0
type: ReacherBulletEnv-v0
---
# **SAC** Agent playing **ReacherBulletEnv-v0**
This is a trained model of a **SAC** agent playing **ReacherBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env ReacherBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo sac --env ReacherBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env ReacherBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env ReacherBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 0.00073),
('learning_starts', 10000),
('n_timesteps', 300000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/sac-AntBulletEnv-v0 | sb3 | 2022-10-11T15:15:51Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:47:23Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 3102.39 +/- 50.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **SAC** Agent playing **AntBulletEnv-v0**
This is a trained model of a **SAC** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env AntBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo sac --env AntBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env AntBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env AntBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 300000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', 64),
('learning_rate', 0.00073),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[400, 300])'),
('tau', 0.02),
('train_freq', 64),
('use_sde', True),
('normalize', False)])
```
|
sb3/sac-HalfCheetah-v3 | sb3 | 2022-10-11T15:15:46Z | 1,387 | 2 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetah-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:46:24Z | ---
library_name: stable-baselines3
tags:
- HalfCheetah-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- metrics:
- type: mean_reward
value: 9564.23 +/- 79.21
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v3
type: HalfCheetah-v3
---
# **SAC** Agent playing **HalfCheetah-v3**
This is a trained model of a **SAC** agent playing **HalfCheetah-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env HalfCheetah-v3 -orga sb3 -f logs/
python enjoy.py --algo sac --env HalfCheetah-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo sac --env HalfCheetah-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env HalfCheetah-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('use_sde', False),
('normalize', False)])
```
|
sb3/a2c-BeamRiderNoFrameskip-v4 | sb3 | 2022-10-11T15:15:31Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:42:23Z | ---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 2954.60 +/- 1104.47
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
---
# **A2C** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BeamRiderNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BeamRiderNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
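The Atari-specific entries (the `AtariWrapper`, 4-frame stacking, 16 parallel environments, and the TensorFlow-style RMSprop) can be wired together by hand as sketched below. This approximates the zoo's setup and assumes SB3 is installed with Atari support (`ale-py` and the ROMs):
```python
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.sb2_compat.rmsprop_tf_like import RMSpropTFLike
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies AtariWrapper to each of the 16 environments
venv = make_atari_env("BeamRiderNoFrameskip-v4", n_envs=16, seed=0)
venv = VecFrameStack(venv, n_stack=4)  # frame_stack: 4

model = A2C(
    "CnnPolicy",
    venv,
    ent_coef=0.01,
    vf_coef=0.25,
    policy_kwargs=dict(
        optimizer_class=RMSpropTFLike,  # TF1-style RMSprop, as in the dict above
        optimizer_kwargs=dict(eps=1e-5),
    ),
    verbose=1,
)
model.learn(total_timesteps=10_000_000)
```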
|
sb3/a2c-Walker2DBulletEnv-v0 | sb3 | 2022-10-11T15:15:27Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T16:41:17Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 809.75 +/- 376.19
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **A2C** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of an **A2C** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env Walker2DBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo a2c --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env Walker2DBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
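Here `normalize: True` combined with `normalize_kwargs` means observations are normalized with running statistics while rewards are left untouched. A sketch of the corresponding `VecNormalize` setup; note that the statistics must be saved and reloaded for evaluation, which the zoo's `enjoy.py` handles automatically:
```python
import gym
import pybullet_envs  # noqa: F401 -- importing registers Walker2DBulletEnv-v0

from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

make_env = lambda: TimeFeatureWrapper(gym.make("Walker2DBulletEnv-v0"))
venv = DummyVecEnv([make_env for _ in range(4)])  # n_envs: 4
venv = VecNormalize(venv, norm_obs=True, norm_reward=False)

# ... after training, persist the running statistics next to the model:
venv.save("vecnormalize.pkl")
```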
|
sb3/a2c-AsteroidsNoFrameskip-v4 | sb3 | 2022-10-11T15:15:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:47:28Z | ---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1614.00 +/- 630.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
---
# **A2C** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env AsteroidsNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env AsteroidsNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env AsteroidsNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
|
sb3/a2c-SeaquestNoFrameskip-v4 | sb3 | 2022-10-11T15:14:58Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SeaquestNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:46:06Z | ---
library_name: stable-baselines3
tags:
- SeaquestNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1706.00 +/- 95.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SeaquestNoFrameskip-v4
type: SeaquestNoFrameskip-v4
---
# **A2C** Agent playing **SeaquestNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **SeaquestNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env SeaquestNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env SeaquestNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env SeaquestNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env SeaquestNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
|
sb3/a2c-HopperBulletEnv-v0 | sb3 | 2022-10-11T15:14:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"HopperBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:26:51Z | ---
library_name: stable-baselines3
tags:
- HopperBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 709.34 +/- 213.24
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HopperBulletEnv-v0
type: HopperBulletEnv-v0
---
# **A2C** Agent playing **HopperBulletEnv-v0**
This is a trained model of an **A2C** agent playing **HopperBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env HopperBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo a2c --env HopperBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env HopperBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env HopperBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/a2c-SpaceInvadersNoFrameskip-v4 | sb3 | 2022-10-11T15:14:48Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:25:50Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 803.50 +/- 323.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **A2C** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env SpaceInvadersNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
|
sb3/a2c-ReacherBulletEnv-v0 | sb3 | 2022-10-11T15:14:43Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"ReacherBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:24:56Z | ---
library_name: stable-baselines3
tags:
- ReacherBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 14.07 +/- 14.27
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ReacherBulletEnv-v0
type: ReacherBulletEnv-v0
---
# **A2C** Agent playing **ReacherBulletEnv-v0**
This is a trained model of an **A2C** agent playing **ReacherBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env ReacherBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo a2c --env ReacherBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env ReacherBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env ReacherBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/a2c-EnduroNoFrameskip-v4 | sb3 | 2022-10-11T15:14:28Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"EnduroNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:21:05Z | ---
library_name: stable-baselines3
tags:
- EnduroNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: EnduroNoFrameskip-v4
type: EnduroNoFrameskip-v4
---
# **A2C** Agent playing **EnduroNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **EnduroNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env EnduroNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env EnduroNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env EnduroNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env EnduroNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
|
sb3/a2c-AntBulletEnv-v0 | sb3 | 2022-10-11T15:14:18Z | 11 | 1 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:19:00Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 2519.30 +/- 10.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env AntBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo a2c --env AntBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env AntBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env AntBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/qrdqn-CartPole-v1 | sb3 | 2022-10-11T15:13:59Z | 13 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:58:44Z | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **QRDQN** Agent playing **CartPole-v1**
This is a trained model of a **QRDQN** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env CartPole-v1 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env CartPole-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env CartPole-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env CartPole-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('exploration_final_eps', 0.04),
('exploration_fraction', 0.16),
('gamma', 0.99),
('gradient_steps', 128),
('learning_rate', 0.0023),
('learning_starts', 1000),
('n_timesteps', 50000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256], n_quantiles=10)'),
('target_update_interval', 10),
('train_freq', 256),
('normalize', False)])
```
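QRDQN lives in SB3 Contrib rather than in core SB3, and `n_quantiles` is a policy keyword rather than a constructor argument. A minimal training sketch with the settings above (assuming `sb3_contrib` is installed):
```python
import gym

from sb3_contrib import QRDQN

env = gym.make("CartPole-v1")

model = QRDQN(
    "MlpPolicy",
    env,
    batch_size=64,
    buffer_size=100_000,
    exploration_final_eps=0.04,
    exploration_fraction=0.16,
    gamma=0.99,
    gradient_steps=128,
    learning_rate=0.0023,
    learning_starts=1000,
    target_update_interval=10,
    train_freq=256,
    policy_kwargs=dict(net_arch=[256, 256], n_quantiles=10),
    verbose=1,
)
model.learn(total_timesteps=50_000)
```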
|
sb3/qrdqn-QbertNoFrameskip-v4 | sb3 | 2022-10-11T15:13:49Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:53:59Z | ---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 15395.00 +/- 1138.35
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **QRDQN** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env QbertNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env QbertNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
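Note the `optimize_memory_usage` flag: it tells the replay buffer to store each observation once instead of keeping separate `obs`/`next_obs` copies, roughly halving memory for image inputs. A sketch of the Atari variant; depending on the SB3 version, this flag may also require `replay_buffer_kwargs=dict(handle_timeout_termination=False)`:
```python
from sb3_contrib import QRDQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

venv = make_atari_env("QbertNoFrameskip-v4", n_envs=1, seed=0)
venv = VecFrameStack(venv, n_stack=4)  # frame_stack: 4

model = QRDQN(
    "CnnPolicy",
    venv,
    exploration_fraction=0.025,
    optimize_memory_usage=True,  # single-copy frame storage in the buffer
    verbose=1,
)
model.learn(total_timesteps=10_000_000)
```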
|
sb3/qrdqn-AsteroidsNoFrameskip-v4 | sb3 | 2022-10-11T15:13:44Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:52:40Z | ---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 1896.00 +/- 480.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
---
# **QRDQN** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env AsteroidsNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env AsteroidsNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env AsteroidsNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
sb3/qrdqn-SpaceInvadersNoFrameskip-v4 | sb3 | 2022-10-11T15:13:35Z | 3 | 1 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:47:35Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 2169.00 +/- 1108.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
sb3/qrdqn-RoadRunnerNoFrameskip-v4 | sb3 | 2022-10-11T15:13:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"RoadRunnerNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:46:21Z | ---
library_name: stable-baselines3
tags:
- RoadRunnerNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 920.00 +/- 107.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoadRunnerNoFrameskip-v4
type: RoadRunnerNoFrameskip-v4
---
# **QRDQN** Agent playing **RoadRunnerNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **RoadRunnerNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env RoadRunnerNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env RoadRunnerNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env RoadRunnerNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env RoadRunnerNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
sb3/qrdqn-EnduroNoFrameskip-v4 | sb3 | 2022-10-11T15:13:25Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"EnduroNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:38:00Z | ---
library_name: stable-baselines3
tags:
- EnduroNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 2827.70 +/- 1359.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: EnduroNoFrameskip-v4
type: EnduroNoFrameskip-v4
---
# **QRDQN** Agent playing **EnduroNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **EnduroNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env EnduroNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env EnduroNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env EnduroNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env EnduroNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
sb3/qrdqn-Acrobot-v1 | sb3 | 2022-10-11T15:13:20Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:37:14Z | ---
library_name: stable-baselines3
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: -67.30 +/- 6.97
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
---
# **QRDQN** Agent playing **Acrobot-v1**
This is a trained model of a **QRDQN** agent playing **Acrobot-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env Acrobot-v1 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env Acrobot-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env Acrobot-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env Acrobot-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 50000),
('exploration_final_eps', 0.1),
('exploration_fraction', 0.12),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 0.00063),
('learning_starts', 0),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256], n_quantiles=25)'),
('target_update_interval', 250),
('train_freq', 4),
('normalize', False)])
```
|
sb3/qrdqn-PongNoFrameskip-v4 | sb3 | 2022-10-11T15:13:05Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:34:15Z | ---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- metrics:
- type: mean_reward
value: 20.70 +/- 0.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
---
# **QRDQN** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env PongNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo qrdqn --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo qrdqn --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env PongNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
sb3/td3-BipedalWalkerHardcore-v3 | sb3 | 2022-10-11T15:12:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"BipedalWalkerHardcore-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T14:01:07Z | ---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- metrics:
- type: mean_reward
value: -95.25 +/- 18.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
---
# **TD3** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **TD3** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env BipedalWalkerHardcore-v3 -orga sb3 -f logs/
python enjoy.py --algo td3 --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo td3 --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env BipedalWalkerHardcore-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 'lin_1e-3'),
('learning_starts', 10000),
('n_timesteps', 10000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', 1),
('normalize', False)])
```
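The `noise_type`/`noise_std` entries translate to an `action_noise` object on the SB3 side, and `'lin_1e-3'` is again a linear decay (see the schedule helper sketched earlier in this document). A minimal approximation:
```python
import gym
import numpy as np

from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("BipedalWalkerHardcore-v3")
n_actions = env.action_space.shape[0]

# noise_type: 'normal' with noise_std: 0.1
action_noise = NormalActionNoise(
    mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions)
)

model = TD3(
    "MlpPolicy",
    env,
    buffer_size=200_000,
    gamma=0.99,
    gradient_steps=-1,   # as many gradient steps as env steps per rollout
    learning_rate=1e-3,  # the zoo decays this linearly over training
    learning_starts=10_000,
    action_noise=action_noise,
    train_freq=1,
    policy_kwargs=dict(net_arch=[400, 300]),
    verbose=1,
)
model.learn(total_timesteps=10_000_000)
```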
|
sb3/td3-HalfCheetahBulletEnv-v0 | sb3 | 2022-10-11T15:12:36Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetahBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T13:08:24Z | ---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- metrics:
- type: mean_reward
value: 2821.04 +/- 20.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
---
# **TD3** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of a **TD3** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env HalfCheetahBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo td3 --env HalfCheetahBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo td3 --env HalfCheetahBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env HalfCheetahBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.001),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
|
sb3/td3-Hopper-v3 | sb3 | 2022-10-11T15:12:21Z | 27 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T13:05:34Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- metrics:
- type: mean_reward
value: 3604.63 +/- 4.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
---
# **TD3** Agent playing **Hopper-v3**
This is a trained model of a **TD3** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Hopper-v3 -orga sb3 -f logs/
python enjoy.py --algo td3 --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo td3 --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Hopper-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('train_freq', 1),
('normalize', False)])
```
|
sb3/td3-HalfCheetah-v3 | sb3 | 2022-10-11T15:12:17Z | 1,864 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetah-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T13:02:50Z | ---
library_name: stable-baselines3
tags:
- HalfCheetah-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- metrics:
- type: mean_reward
value: 9709.01 +/- 104.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v3
type: HalfCheetah-v3
---
# **TD3** Agent playing **HalfCheetah-v3**
This is a trained model of a **TD3** agent playing **HalfCheetah-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env HalfCheetah-v3 -orga sb3 -f logs/
python enjoy.py --algo td3 --env HalfCheetah-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo td3 --env HalfCheetah-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env HalfCheetah-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
sb3/ppo-Walker2DBulletEnv-v0 | sb3 | 2022-10-11T15:11:57Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:57:16Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2120.20 +/- 6.34
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **PPO** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **PPO** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env Walker2DBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ppo --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env Walker2DBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 'lin_0.4'),
('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.92),
('gamma', 0.99),
('learning_rate', 3e-05),
('max_grad_norm', 0.5),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 512),
('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.ReLU, '
'net_arch=[dict(pi=[256, 256], vf=[256, 256])] )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-QbertNoFrameskip-v4 | sb3 | 2022-10-11T15:11:47Z | 12 | 0 | stable-baselines3 | [
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:43:00Z | ---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 15542.50 +/- 2987.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env QbertNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
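The `lin_` prefix on `clip_range` and `learning_rate` is RL Zoo shorthand for a schedule that decays linearly from the given value to zero over training. A minimal sketch of the equivalent SB3 callable, where `progress_remaining` goes from 1.0 at the start of training to 0.0 at the end:
```python
def linear_schedule(initial_value: float):
    """Map remaining training progress (1.0 -> 0.0) to a decayed value."""
    def schedule(progress_remaining: float) -> float:
        return progress_remaining * initial_value
    return schedule

# 'lin_2.5e-4' corresponds to:
learning_rate = linear_schedule(2.5e-4)
```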
|
sb3/ppo-BreakoutNoFrameskip-v4 | sb3 | 2022-10-11T15:11:37Z | 51 | 0 | stable-baselines3 | [
"stable-baselines3",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:05:04Z | ---
library_name: stable-baselines3
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 398.00 +/- 16.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
---
# **PPO** Agent playing **BreakoutNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **BreakoutNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env BreakoutNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env BreakoutNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env BreakoutNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env BreakoutNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
sb3/ppo-HalfCheetahBulletEnv-v0 | sb3 | 2022-10-11T15:11:33Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetahBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:04:03Z | ---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2871.46 +/- 69.65
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
---
# **PPO** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of a **PPO** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env HalfCheetahBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ppo --env HalfCheetahBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env HalfCheetahBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env HalfCheetahBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.4),
('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 3e-05),
('max_grad_norm', 0.5),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 512),
('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.ReLU, '
'net_arch=[dict(pi=[256, 256], vf=[256, 256])] )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
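Because `env_wrapper` appends a time feature to every observation and `normalize` is on, both the `sb3-contrib` wrapper and the saved `VecNormalize` statistics must be restored when evaluating outside the Zoo. A minimal sketch (the statistics filename is an assumption; check the repo's file list):
```python
import gym
import pybullet_envs  # noqa: F401 -- registers HalfCheetahBulletEnv-v0
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

env = DummyVecEnv([lambda: TimeFeatureWrapper(gym.make("HalfCheetahBulletEnv-v0"))])
env = VecNormalize.load("vec_normalize.pkl", env)  # filename assumed
env.training = False      # freeze the running statistics at test time
env.norm_reward = False   # matches norm_reward=False above
```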
|
sb3/ppo-HopperBulletEnv-v0 | sb3 | 2022-10-11T15:11:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"HopperBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:03:04Z | ---
library_name: stable-baselines3
tags:
- HopperBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2431.28 +/- 574.33
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HopperBulletEnv-v0
type: HopperBulletEnv-v0
---
# **PPO** Agent playing **HopperBulletEnv-v0**
This is a trained model of a **PPO** agent playing **HopperBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env HopperBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ppo --env HopperBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env HopperBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env HopperBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 'lin_0.4'),
('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.92),
('gamma', 0.99),
('learning_rate', 3e-05),
('max_grad_norm', 0.5),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 512),
('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.ReLU, '
'net_arch=[dict(pi=[256, 256], vf=[256, 256])] )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-SpaceInvadersNoFrameskip-v4 | sb3 | 2022-10-11T15:11:23Z | 16 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:02:09Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 886.50 +/- 417.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
sb3/ppo-RoadRunnerNoFrameskip-v4 | sb3 | 2022-10-11T15:11:13Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"RoadRunnerNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:00:25Z | ---
library_name: stable-baselines3
tags:
- RoadRunnerNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 970.00 +/- 45.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RoadRunnerNoFrameskip-v4
type: RoadRunnerNoFrameskip-v4
---
# **PPO** Agent playing **RoadRunnerNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **RoadRunnerNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env RoadRunnerNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env RoadRunnerNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env RoadRunnerNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env RoadRunnerNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
sb3/ppo-Hopper-v3 | sb3 | 2022-10-11T15:11:08Z | 117 | 0 | stable-baselines3 | [
"stable-baselines3",
"Hopper-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:59:26Z | ---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2410.11 +/- 9.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
---
# **PPO** Agent playing **Hopper-v3**
This is a trained model of a **PPO** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env Hopper-v3 -orga sb3 -f logs/
python enjoy.py --algo ppo --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env Hopper-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('clip_range', 0.2),
('ent_coef', 0.00229519),
('gae_lambda', 0.99),
('gamma', 0.999),
('learning_rate', 9.80828e-05),
('max_grad_norm', 0.7),
('n_envs', 1),
('n_epochs', 5),
('n_steps', 512),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict( log_std_init=-2, ortho_init=False, activation_fn=nn.ReLU, '
'net_arch=[dict(pi=[256, 256], vf=[256, 256])] )'),
('vf_coef', 0.835671),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
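The `policy_kwargs` entry is stored as a string that the RL Zoo evaluates at load time; in plain SB3 it is an ordinary dict. A minimal from-scratch sketch of the same configuration (not the trained checkpoint, just the equivalent constructor call):
```python
import torch.nn as nn
from stable_baselines3 import PPO

policy_kwargs = dict(
    log_std_init=-2,
    ortho_init=False,
    activation_fn=nn.ReLU,
    net_arch=[dict(pi=[256, 256], vf=[256, 256])],
)
model = PPO("MlpPolicy", "Hopper-v3",
            batch_size=32, n_steps=512, n_epochs=5,
            gamma=0.999, gae_lambda=0.99, clip_range=0.2,
            learning_rate=9.80828e-05, ent_coef=0.00229519,
            max_grad_norm=0.7, vf_coef=0.835671,
            policy_kwargs=policy_kwargs)
# (the Zoo additionally wraps the env in VecNormalize; omitted here)
```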
|
sb3/ppo-EnduroNoFrameskip-v4 | sb3 | 2022-10-11T15:11:04Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"EnduroNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:55:15Z | ---
library_name: stable-baselines3
tags:
- EnduroNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 877.20 +/- 218.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: EnduroNoFrameskip-v4
type: EnduroNoFrameskip-v4
---
# **PPO** Agent playing **EnduroNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **EnduroNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env EnduroNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env EnduroNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env EnduroNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env EnduroNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
sb3/ppo-Swimmer-v3 | sb3 | 2022-10-11T15:10:59Z | 120 | 0 | stable-baselines3 | [
"stable-baselines3",
"Swimmer-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:54:23Z | ---
library_name: stable-baselines3
tags:
- Swimmer-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 281.78 +/- 11.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v3
type: Swimmer-v3
---
# **PPO** Agent playing **Swimmer-v3**
This is a trained model of a **PPO** agent playing **Swimmer-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env Swimmer-v3 -orga sb3 -f logs/
python enjoy.py --algo ppo --env Swimmer-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env Swimmer-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env Swimmer-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.9999),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-Acrobot-v1 | sb3 | 2022-10-11T15:10:49Z | 2,555 | 1 | stable-baselines3 | [
"stable-baselines3",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:35:25Z | ---
library_name: stable-baselines3
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -74.60 +/- 11.48
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
---
# **PPO** Agent playing **Acrobot-v1**
This is a trained model of a **PPO** agent playing **Acrobot-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env Acrobot-v1 -orga sb3 -f logs/
python enjoy.py --algo ppo --env Acrobot-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env Acrobot-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env Acrobot-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.94),
('gamma', 0.99),
('n_envs', 16),
('n_epochs', 4),
('n_steps', 256),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ppo-LunarLander-v2 | sb3 | 2022-10-11T15:10:44Z | 2,136 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:18:11Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 233.56 +/- 53.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env LunarLander-v2 -orga sb3 -f logs/
python enjoy.py --algo ppo --env LunarLander-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env LunarLander-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env LunarLander-v2 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('ent_coef', 0.01),
('gae_lambda', 0.98),
('gamma', 0.999),
('n_envs', 16),
('n_epochs', 4),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
sb3/ppo-PongNoFrameskip-v4 | sb3 | 2022-10-11T15:10:39Z | 32 | 1 | stable-baselines3 | [
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:16:21Z | ---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 21.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
---
# **PPO** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env PongNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env PongNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
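The `env_wrapper` and `frame_stack` entries can be reproduced with SB3's Atari helpers when evaluating outside the Zoo. A minimal sketch, with the zip filename assumed from the RL Zoo convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# AtariWrapper preprocessing + 4-frame stacking, as in the table above.
env = VecFrameStack(make_atari_env("PongNoFrameskip-v4", n_envs=1), n_stack=4)
model = PPO.load(load_from_hub("sb3/ppo-PongNoFrameskip-v4",
                               "ppo-PongNoFrameskip-v4.zip"))
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```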
|
sb3/trpo-Ant-v3 | sb3 | 2022-10-11T15:10:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Ant-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:15:31Z | ---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- metrics:
- type: mean_reward
value: 4735.93 +/- 1018.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
---
# **TRPO** Agent playing **Ant-v3**
This is a trained model of a **TRPO** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env Ant-v3 -orga sb3 -f logs/
python enjoy.py --algo trpo --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo trpo --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env Ant-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
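Note that TRPO ships in `sb3-contrib` rather than core SB3. A minimal loading sketch (filename assumed from the RL Zoo convention; since `normalize` is on above, the `VecNormalize` statistics must also be restored, as in the PPO examples):
```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import TRPO

model = TRPO.load(load_from_hub("sb3/trpo-Ant-v3", "trpo-Ant-v3.zip"))
```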
|
sb3/trpo-LunarLanderContinuous-v2 | sb3 | 2022-10-11T15:10:10Z | 11 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLanderContinuous-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:11:26Z | ---
library_name: stable-baselines3
tags:
- LunarLanderContinuous-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- metrics:
- type: mean_reward
value: 273.95 +/- 23.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
---
# **TRPO** Agent playing **LunarLanderContinuous-v2**
This is a trained model of a **TRPO** agent playing **LunarLanderContinuous-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env LunarLanderContinuous-v2 -orga sb3 -f logs/
python enjoy.py --algo trpo --env LunarLanderContinuous-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo trpo --env LunarLanderContinuous-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env LunarLanderContinuous-v2 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 200000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/trpo-HopperBulletEnv-v0 | sb3 | 2022-10-11T15:09:55Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"HopperBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:02:07Z | ---
library_name: stable-baselines3
tags:
- HopperBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- metrics:
- type: mean_reward
value: 2650.38 +/- 63.62
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HopperBulletEnv-v0
type: HopperBulletEnv-v0
---
# **TRPO** Agent playing **HopperBulletEnv-v0**
This is a trained model of a **TRPO** agent playing **HopperBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env HopperBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo trpo --env HopperBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo trpo --env HopperBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env HopperBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/trpo-ReacherBulletEnv-v0 | sb3 | 2022-10-11T15:09:51Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"ReacherBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:01:11Z | ---
library_name: stable-baselines3
tags:
- ReacherBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- metrics:
- type: mean_reward
value: 16.05 +/- 9.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ReacherBulletEnv-v0
type: ReacherBulletEnv-v0
---
# **TRPO** Agent playing **ReacherBulletEnv-v0**
This is a trained model of a **TRPO** agent playing **ReacherBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env ReacherBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo trpo --env ReacherBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo trpo --env ReacherBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env ReacherBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 300000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-1, ortho_init=False, activation_fn=nn.ReLU, '
'net_arch=[dict(pi=[256, 256], vf=[256, 256])] )'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/trpo-Acrobot-v1 | sb3 | 2022-10-11T15:09:31Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"Acrobot-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T10:56:52Z | ---
library_name: stable-baselines3
tags:
- Acrobot-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- metrics:
- type: mean_reward
value: -87.40 +/- 12.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Acrobot-v1
type: Acrobot-v1
---
# **TRPO** Agent playing **Acrobot-v1**
This is a trained model of a **TRPO** agent playing **Acrobot-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env Acrobot-v1 -orga sb3 -f logs/
python enjoy.py --algo trpo --env Acrobot-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo trpo --env Acrobot-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env Acrobot-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 100000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/trpo-LunarLander-v2 | sb3 | 2022-10-11T15:09:26Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T10:56:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- metrics:
- type: mean_reward
value: 130.42 +/- 106.61
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **TRPO** Agent playing **LunarLander-v2**
This is a trained model of a **TRPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env LunarLander-v2 -orga sb3 -f logs/
python enjoy.py --algo trpo --env LunarLander-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo trpo --env LunarLander-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env LunarLander-v2 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('cg_damping', 0.01),
('gae_lambda', 0.98),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 15),
('n_envs', 2),
('n_steps', 512),
('n_timesteps', 200000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
sb3/ppo-MountainCar-v0 | sb3 | 2022-10-11T15:09:17Z | 1,989 | 1 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-26T19:59:34Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -108.20 +/- 8.16
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MountainCar-v0 -orga sb3 -f logs/
python enjoy.py --algo ppo --env MountainCar-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env MountainCar-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MountainCar-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('gae_lambda', 0.98),
('gamma', 0.99),
('n_envs', 16),
('n_epochs', 4),
('n_steps', 16),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/tqc-Pendulum-v1 | sb3 | 2022-10-11T15:09:12Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"Pendulum-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-22T20:35:42Z | ---
library_name: stable-baselines3
tags:
- Pendulum-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -171.32 +/- 96.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pendulum-v1
type: Pendulum-v1
---
# **TQC** Agent playing **Pendulum-v1**
This is a trained model of a **TQC** agent playing **Pendulum-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Pendulum-v1 -orga sb3 -f logs/
python enjoy.py --algo tqc --env Pendulum-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env Pendulum-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Pendulum-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('learning_rate', 0.001),
('n_timesteps', 20000),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
sb3/a2c-MountainCarContinuous-v0 | sb3 | 2022-10-11T15:08:52Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCarContinuous-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-20T07:34:38Z | ---
library_name: stable-baselines3
tags:
- MountainCarContinuous-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 91.17 +/- 0.26
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCarContinuous-v0
type: MountainCarContinuous-v0
---
# **A2C** Agent playing **MountainCarContinuous-v0**
This is a trained model of an **A2C** agent playing **MountainCarContinuous-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env MountainCarContinuous-v0 -orga sb3 -f logs/
python enjoy.py --algo a2c --env MountainCarContinuous-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env MountainCarContinuous-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env MountainCarContinuous-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('n_envs', 4),
('n_steps', 100),
('n_timesteps', 100000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=0.0, ortho_init=False)'),
('sde_sample_freq', 16),
('use_sde', True),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
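`use_sde=True` switches exploration to generalized State-Dependent Exploration, with the noise matrix resampled every 16 steps here. A minimal from-scratch training sketch mirroring these settings:
```python
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("MountainCarContinuous-v0", n_envs=4)
model = A2C("MlpPolicy", env, n_steps=100, ent_coef=0.0,
            use_sde=True, sde_sample_freq=16,
            policy_kwargs=dict(log_std_init=0.0, ortho_init=False))
# (the Zoo additionally wraps the env in VecNormalize; omitted here)
model.learn(total_timesteps=100_000)
```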
|
sb3/ppo-MsPacmanNoFrameskip-v4 | sb3 | 2022-10-11T15:08:28Z | 16 | 1 | stable-baselines3 | [
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-05T19:40:35Z | ---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1659.00 +/- 144.81
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
---
# **PPO** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MsPacmanNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MsPacmanNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
sb3/a2c-MsPacmanNoFrameskip-v4 | sb3 | 2022-10-11T15:08:23Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-05T19:36:37Z | ---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 2019.00 +/- 922.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
---
# **A2C** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env MsPacmanNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env MsPacmanNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
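`RMSpropTFLike` is SB3's TensorFlow-flavoured RMSprop, kept for parity with the original A2C implementation. A minimal sketch of passing it through `policy_kwargs`:
```python
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.sb2_compat.rmsprop_tf_like import RMSpropTFLike
from stable_baselines3.common.vec_env import VecFrameStack

env = VecFrameStack(make_atari_env("MsPacmanNoFrameskip-v4", n_envs=16), n_stack=4)
model = A2C("CnnPolicy", env, ent_coef=0.01, vf_coef=0.25,
            policy_kwargs=dict(optimizer_class=RMSpropTFLike,
                               optimizer_kwargs=dict(eps=1e-5)))
```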
|
sb3/a2c-Humanoid-v3 | sb3 | 2022-10-11T15:08:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-10T12:19:45Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 380.12 +/- 81.26
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
---
# **A2C** Agent playing **Humanoid-v3**
This is a trained model of an **A2C** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env Humanoid-v3 -orga sb3 -f logs/
python enjoy.py --algo a2c --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env Humanoid-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/tqc-parking-v0 | sb3 | 2022-10-11T15:08:08Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"parking-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:57:08Z | ---
library_name: stable-baselines3
tags:
- parking-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -7.14 +/- 3.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: parking-v0
type: parking-v0
---
# **TQC** Agent playing **parking-v0**
This is a trained model of a **TQC** agent playing **parking-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env parking-v0 -orga sb3 -f logs/
python enjoy.py --algo tqc --env parking-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env parking-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env parking-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.0015),
('n_timesteps', 100000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=False, goal_selection_strategy='episode', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
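`parking-v0` comes from the `highway-env` package and returns dict observations, hence `MultiInputPolicy`; TQC itself lives in `sb3-contrib`. A minimal loading sketch under those assumptions (zip filename per the RL Zoo convention):
```python
import gym
import highway_env  # noqa: F401 -- registers parking-v0
from huggingface_sb3 import load_from_hub
from sb3_contrib import TQC

model = TQC.load(load_from_hub("sb3/tqc-parking-v0", "tqc-parking-v0.zip"))
env = gym.make("parking-v0")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```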
|
sb3/ddpg-Pendulum-v1 | sb3 | 2022-10-11T15:07:59Z | 30 | 0 | stable-baselines3 | [
"stable-baselines3",
"Pendulum-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:22:09Z | ---
library_name: stable-baselines3
tags:
- Pendulum-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: -211.65 +/- 134.05
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pendulum-v1
type: Pendulum-v1
---
# **DDPG** Agent playing **Pendulum-v1**
This is a trained model of a **DDPG** agent playing **Pendulum-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Pendulum-v1 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env Pendulum-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env Pendulum-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Pendulum-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.001),
('learning_starts', 10000),
('n_timesteps', 20000),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
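The `noise_type`/`noise_std` entries correspond to Gaussian action noise in SB3. A minimal sketch of the equivalent constructor call (an illustration, not the zoo's exact training script):
```python
import gym
import numpy as np
from stable_baselines3 import DDPG
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("Pendulum-v1")
n_actions = env.action_space.shape[0]
# noise_type='normal', noise_std=0.1 from the config above
action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions))

model = DDPG(
    "MlpPolicy",
    env,
    buffer_size=200_000,
    gamma=0.98,
    learning_rate=1e-3,
    learning_starts=10_000,
    train_freq=(1, "episode"),
    gradient_steps=-1,  # do as many gradient steps as env steps collected
    action_noise=action_noise,
    policy_kwargs=dict(net_arch=[400, 300]),
    verbose=1,
)
model.learn(total_timesteps=20_000)
```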
|
sb3/dqn-LunarLander-v2 | sb3 | 2022-10-11T15:07:39Z | 2,501 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:49:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 136.79 +/- 42.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env LunarLander-v2 -orga sb3 -f logs/
python enjoy.py --algo dqn --env LunarLander-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env LunarLander-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env LunarLander-v2 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 50000),
('exploration_final_eps', 0.1),
('exploration_fraction', 0.12),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 0.00063),
('learning_starts', 0),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256])'),
('target_update_interval', 250),
('train_freq', 4),
('normalize', False)])
```
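The checkpoint can also be loaded outside the zoo scripts. A minimal sketch, assuming the repo stores the model under the zoo's usual `<algo>-<env>.zip` naming:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption based on the zoo's <algo>-<env>.zip convention.
checkpoint = load_from_hub("sb3/dqn-LunarLander-v2", "dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```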
|
sb3/a2c-HalfCheetahBulletEnv-v0 | sb3 | 2022-10-11T15:07:25Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"HalfCheetahBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:27:51Z | ---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 2112.18 +/- 34.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
---
# **A2C** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of an **A2C** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env HalfCheetahBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env HalfCheetahBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.0),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.00096'),
('max_grad_norm', 0.5),
('n_envs', 4),
('n_steps', 8),
('n_timesteps', 2000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, full_std=True)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
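`lin_0.00096` is the zoo's shorthand for a learning rate that decays linearly from 0.00096 to 0 over training. A minimal sketch of such a schedule, following SB3's convention that a schedule receives the remaining training progress (1 at the start, 0 at the end):
```python
from typing import Callable

def linear_schedule(initial_value: float) -> Callable[[float], float]:
    """Linear decay from initial_value to 0, as in the zoo's 'lin_...' shorthand."""
    def schedule(progress_remaining: float) -> float:
        # progress_remaining goes from 1 (start of training) to 0 (end).
        return progress_remaining * initial_value
    return schedule

# Usage sketch: A2C("MlpPolicy", env, learning_rate=linear_schedule(0.00096), ...)
```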
|
sb3/ppo-LunarLanderContinuous-v2 | sb3 | 2022-10-11T15:07:10Z | 2,002 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLanderContinuous-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:08:37Z | ---
library_name: stable-baselines3
tags:
- LunarLanderContinuous-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 274.47 +/- 24.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLanderContinuous-v2
type: LunarLanderContinuous-v2
---
# **PPO** Agent playing **LunarLanderContinuous-v2**
This is a trained model of a **PPO** agent playing **LunarLanderContinuous-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env LunarLanderContinuous-v2 -orga sb3 -f logs/
python enjoy.py --algo ppo --env LunarLanderContinuous-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env LunarLanderContinuous-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env LunarLanderContinuous-v2 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('ent_coef', 0.01),
('gae_lambda', 0.98),
('gamma', 0.999),
('n_envs', 16),
('n_epochs', 4),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
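For reference, a minimal sketch of instantiating PPO with these settings on 16 parallel environments (an illustration, not the zoo's exact training script):
```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("LunarLanderContinuous-v2", n_envs=16)

model = PPO(
    "MlpPolicy",
    env,
    n_steps=1024,  # rollout length per environment
    batch_size=64,
    n_epochs=4,
    gamma=0.999,
    gae_lambda=0.98,
    ent_coef=0.01,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```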
|
sb3/ppo-AsteroidsNoFrameskip-v4 | sb3 | 2022-10-11T15:07:05Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T12:07:42Z | ---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2439.00 +/- 590.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
---
# **PPO** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env AsteroidsNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env AsteroidsNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
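The `env_wrapper` and `frame_stack` entries correspond to SB3's standard Atari preprocessing. A minimal sketch of building the vectorized environment (the linear schedules for `learning_rate` and `clip_range` are omitted here for brevity):
```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies AtariWrapper (frame skip, resize, etc.) to each copy.
env = make_atari_env("AsteroidsNoFrameskip-v4", n_envs=8)
env = VecFrameStack(env, n_stack=4)

model = PPO(
    "CnnPolicy",
    env,
    n_steps=128,
    batch_size=256,
    n_epochs=4,
    ent_coef=0.01,
    vf_coef=0.5,
    verbose=1,
)
```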
|
sb3/ppo-Walker2d-v3 | sb3 | 2022-10-11T15:07:01Z | 111 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2d-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T11:17:27Z | ---
library_name: stable-baselines3
tags:
- Walker2d-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 3571.74 +/- 807.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v3
type: Walker2d-v3
---
# **PPO** Agent playing **Walker2d-v3**
This is a trained model of a **PPO** agent playing **Walker2d-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env Walker2d-v3 -orga sb3 -f logs/
python enjoy.py --algo ppo --env Walker2d-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env Walker2d-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env Walker2d-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
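`normalize: True` with `norm_reward: False` means observations, but not rewards, are normalized by `VecNormalize`. A minimal sketch; at evaluation time the saved statistics must be reloaded and frozen rather than re-estimated:
```python
import gym
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

env = DummyVecEnv([lambda: TimeFeatureWrapper(gym.make("Walker2d-v3"))])
env = VecNormalize(env, norm_obs=True, norm_reward=False)

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)

# For evaluation, reload and freeze the running statistics, e.g.:
# eval_env = VecNormalize.load("vecnormalize.pkl", eval_env)
# eval_env.training = False
```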
|
sb3/dqn-MsPacmanNoFrameskip-v4 | sb3 | 2022-10-11T15:06:46Z | 129 | 0 | stable-baselines3 | [
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-08-05T19:42:08Z | ---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 2682.00 +/- 475.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
---
# **DQN** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
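`optimize_memory_usage: True` tells the replay buffer to store each observation only once and recover the next observation from the following transition, roughly halving replay memory for image observations. A minimal constructor sketch:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = VecFrameStack(make_atari_env("MsPacmanNoFrameskip-v4"), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    buffer_size=100_000,
    learning_rate=1e-4,
    learning_starts=100_000,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=True,  # keep a single copy of each frame in the buffer
    verbose=1,
)
```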
|
sb3/td3-AntBulletEnv-v0 | sb3 | 2022-10-11T15:06:41Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T13:03:37Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- metrics:
- type: mean_reward
value: 3262.99 +/- 64.99
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **TD3** Agent playing **AntBulletEnv-v0**
This is a trained model of a **TD3** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env AntBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo td3 --env AntBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo td3 --env AntBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env AntBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.001),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
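`TimeFeatureWrapper` appends the normalized remaining episode time to the observation, which keeps a fixed-horizon task (approximately) Markovian for the off-policy critic. A minimal sketch, assuming `pybullet` is installed (importing `pybullet_envs` registers `AntBulletEnv-v0`):
```python
import gym
import pybullet_envs  # assumed dependency: registers AntBulletEnv-v0
from sb3_contrib.common.wrappers import TimeFeatureWrapper

env = TimeFeatureWrapper(gym.make("AntBulletEnv-v0"))
# The observation gains one extra dimension: the time feature in [0, 1].
print(env.observation_space)
```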
|
sb3/dqn-PongNoFrameskip-v4 | sb3 | 2022-10-11T15:06:36Z | 726 | 1 | stable-baselines3 | [
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T17:48:01Z | ---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 20.70 +/- 0.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
---
# **DQN** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env PongNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo dqn --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env PongNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
sb3/a2c-PongNoFrameskip-v4 | sb3 | 2022-10-11T15:06:32Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T15:14:59Z | ---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 17.10 +/- 2.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
---
# **A2C** Agent playing **PongNoFrameskip-v4**
This is a trained model of an **A2C** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env PongNoFrameskip-v4 -orga sb3 -f logs/
python enjoy.py --algo a2c --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env PongNoFrameskip-v4 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('n_envs', 16),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(optimizer_class=RMSpropTFLike, '
'optimizer_kwargs=dict(eps=1e-5))'),
('vf_coef', 0.25),
('normalize', False)])
```
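`RMSpropTFLike` is SB3's TensorFlow-style RMSProp variant (epsilon placed inside the square root), kept for parity with results from the original Stable Baselines. A minimal sketch of wiring it in through `policy_kwargs`:
```python
from stable_baselines3 import A2C
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.sb2_compat.rmsprop_tf_like import RMSpropTFLike
from stable_baselines3.common.vec_env import VecFrameStack

env = VecFrameStack(make_atari_env("PongNoFrameskip-v4", n_envs=16), n_stack=4)

model = A2C(
    "CnnPolicy",
    env,
    ent_coef=0.01,
    vf_coef=0.25,
    policy_kwargs=dict(optimizer_class=RMSpropTFLike, optimizer_kwargs=dict(eps=1e-5)),
    verbose=1,
)
```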
|
sb3/tqc-FetchPickAndPlace-v1 | sb3 | 2022-10-11T15:06:27Z | 7 | 2 | stable-baselines3 | [
"stable-baselines3",
"FetchPickAndPlace-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-06-02T20:58:19Z | ---
library_name: stable-baselines3
tags:
- FetchPickAndPlace-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -8.50 +/- 3.47
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPickAndPlace-v1
type: FetchPickAndPlace-v1
---
# **TQC** Agent playing **FetchPickAndPlace-v1**
This is a trained model of a **TQC** agent playing **FetchPickAndPlace-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPickAndPlace-v1 -orga sb3 -f logs/
python enjoy.py --algo tqc --env FetchPickAndPlace-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env FetchPickAndPlace-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPickAndPlace-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
|