modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
ArthurZ/flan-ul2 | ArthurZ | 2023-03-06T11:32:22Z | 5 | 0 | transformers | [
"transformers",
"tf",
"jax",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-03T17:45:35Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: flan-ul2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# flan-ul2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
agarvil/LunarLander | agarvil | 2023-03-06T11:29:18Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T11:28:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.73 +/- 20.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
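A minimal sketch of loading and evaluating this checkpoint with `huggingface_sb3` and Stable-Baselines3; the archive filename inside the repo is an assumption, and whether the environment comes from `gym` or `gymnasium` depends on the installed SB3 version:
```python
import gymnasium as gym  # or `import gym` for older SB3 releases
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption -- check the repo's file list)
checkpoint = load_from_hub(repo_id="agarvil/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out a few evaluation episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```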
|
Theju/SID_CA_M05 | Theju | 2023-03-06T10:51:45Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T06:41:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SID_CA_M05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SID_CA_M05
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
jamesup/rl_course_vizdoom_health_gathering_supreme | jamesup | 2023-03-06T10:26:05Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T10:25:25Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.60 +/- 5.35
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jamesup/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# entry point assumed: the standard Sample-Factory 2.0 ViZDoom enjoy script
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# entry point assumed: the standard Sample-Factory 2.0 ViZDoom train script
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Nasree/q-Taxi-3 | Nasree | 2023-03-06T10:26:04Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T10:26:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Nasree/q-Taxi-3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
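Beyond loading the pickled dictionary, playing an episode greedily from the Q-table looks roughly like this (a sketch: `load_from_hub` is re-implemented with `hf_hub_download`, the `qtable`/`env_id` keys follow the Deep RL course convention, and the classic 4-tuple Gym step API is assumed):
```python
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Hypothetical stand-in for the course helper: download the pickle and return the model dict
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Nasree/q-Taxi-3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)     # classic Gym 4-tuple step API
    total_reward += reward
print("episode return:", total_reward)
```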
|
Nasree/q-FrozenLake-v1-4x4-noSlippery | Nasree | 2023-03-06T10:23:14Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T10:23:10Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.67 +/- 0.47
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Nasree/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
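For this repository the name already encodes the environment variant, so the `gym.make` call can be written out explicitly (a sketch using the standard `FrozenLake-v1` registration kwargs):
```python
import gym

# 4x4 map with deterministic transitions, matching the "4x4-noSlippery" name
env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False)
```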
|
vocabtrimmer/mt5-small-trimmed-es-esquad-qg | vocabtrimmer | 2023-03-06T09:42:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"es",
"dataset:lmqg/qg_esquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-06T09:42:34Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: es
datasets:
- lmqg/qg_esquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India."
example_title: "Question Generation Example 1"
- text: "a <hl> noviembre <hl> , que es también la estación lluviosa."
example_title: "Question Generation Example 2"
- text: "como <hl> el gobierno de Abbott <hl> que asumió el cargo el 18 de septiembre de 2013."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-es-esquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_esquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 9.52
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 24.24
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 22.26
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 84.19
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 58.91
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-es-esquad-qg`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-es](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es) for the question generation task on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-es](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es)
- **Language:** es
- **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="es", model="vocabtrimmer/mt5-small-trimmed-es-esquad-qg")
# model prediction
questions = model.generate_q(list_context="a noviembre , que es también la estación lluviosa.", list_answer="noviembre")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-es-esquad-qg")
output = pipe("del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-esquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 84.19 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_1 | 25.92 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_2 | 17.66 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_3 | 12.76 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| Bleu_4 | 9.52 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| METEOR | 22.26 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| MoverScore | 58.91 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
| ROUGE_L | 24.24 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_esquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-es
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-esquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
pnparam/find_tr2_h | pnparam | 2023-03-06T09:19:59Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T08:26:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: find_tr2_h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# find_tr2_h
This model is a fine-tuned version of [Sjdan/mst_1](https://huggingface.co/Sjdan/mst_1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
rossHuggingMay/q-Taxi-v3 | rossHuggingMay | 2023-03-06T09:18:23Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T09:18:20Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="rossHuggingMay/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
swl-models/FJ_D | swl-models | 2023-03-06T09:16:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-06T09:16:21Z | ---
license: creativeml-openrail-m
duplicated_from: SakuraFoxKira/FJ_D
---
|
Sjdan/CA_SID_F05_2 | Sjdan | 2023-03-06T09:14:22Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T04:49:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: CA_SID_F05_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA_SID_F05_2
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Sjdan/CA_SID_M11_2 | Sjdan | 2023-03-06T09:12:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T05:09:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: CA_SID_M11_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA_SID_M11_2
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
research-backup/xlm-roberta-large-trimmed-ar-30000 | research-backup | 2023-03-06T09:07:15Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T09:05:21Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-ar-30000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-ar-30000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 334,642,482 |
| parameter_size_embedding | 256,002,048 | 30,722,048 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 59.74 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ar | vocabtrimmer/mc4_validation | text | ar | validation | 30000 | 2 | |
angelinux/clipped-LunarLander-v2 | angelinux | 2023-03-06T09:01:13Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T08:59:47Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -115.71 +/- 55.75
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'angelinux/clipped-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
pnparam/SID_LOSO_M09_2 | pnparam | 2023-03-06T08:46:45Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T07:13:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SID_LOSO_M09_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SID_LOSO_M09_2
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Sjdan/CA_SID_M16_2 | Sjdan | 2023-03-06T08:20:52Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T04:27:48Z | ---
tags:
- generated_from_trainer
model-index:
- name: CA_SID_M16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA_SID_M16_2
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Vorlde/ppo-LunarLander-v2 | Vorlde | 2023-03-06T08:10:29Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T18:13:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.41 +/- 17.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kraken2404/dqn-SpaceInvadersNoFrameskip-v4_v3 | kraken2404 | 2023-03-06T08:07:14Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T08:06:14Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 174.00 +/- 49.99
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kraken2404 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kraken2404 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kraken2404
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 110000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
bufan/tj | bufan | 2023-03-06T08:03:18Z | 0 | 0 | null | [
"zh",
"license:apache-2.0",
"region:us"
]
| null | 2023-03-06T07:59:46Z | ---
license: apache-2.0
language:
- zh
--- |
Alex48/poca-SoccerTwos-v4 | Alex48 | 2023-03-06T07:23:56Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-03-02T17:42:28Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Alex48/poca-SoccerTwos-v4
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Theju/SID_CA_M01 | Theju | 2023-03-06T07:22:34Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T06:21:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SID_CA_M01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SID_CA_M01
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
kraken2404/dqn-SpaceInvadersNoFrameskip-v4_v2 | kraken2404 | 2023-03-06T07:20:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T07:20:03Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 277.50 +/- 22.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kraken2404 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kraken2404 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kraken2404
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 90000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
plegg/poca-SoccerTwos-v2 | plegg | 2023-03-06T07:19:59Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-03-05T23:00:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: plegg/poca-SoccerTwos-v2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Theju/SID_CA_M07 | Theju | 2023-03-06T07:19:02Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T06:29:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SID_CA_M07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SID_CA_M07
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Theju/SID_CA_M04 | Theju | 2023-03-06T06:48:34Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-06T06:12:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SID_CA_M04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SID_CA_M04
This model is a fine-tuned version of [Sjdan/cls_3ep1](https://huggingface.co/Sjdan/cls_3ep1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
research-backup/xlm-roberta-large-trimmed-de-90000 | research-backup | 2023-03-06T06:29:21Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T06:11:52Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-de-90000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-de-90000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 396,142,482 |
| parameter_size_embedding | 256,002,048 | 92,162,048 |
| vocab_size | 250,002 | 90,002 |
| compression_rate_full | 100.0 | 70.72 |
| compression_rate_embedding | 100.0 | 36.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 90000 | 2 | |
dadadadadatou/KbrDollLikeness | dadadadadatou | 2023-03-06T06:29:07Z | 0 | 6 | null | [
"lora",
"koreanDollLikness",
"japaneseDollLikness",
"taiwanDollLikness",
"license:openrail",
"region:us"
]
| null | 2023-03-06T03:49:29Z | ---
license: openrail
tags:
- lora
- koreanDollLikness
- japaneseDollLikness
- taiwanDollLikness
---
Backups for Kbr's Doll Likeness series models.
Credits go to https://civitai.com/user/Kbr (the account has since been deleted) |
danielcwq/distilbert-base-uncased-finetuned-H2Physics | danielcwq | 2023-03-06T06:24:53Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
]
| question-answering | 2023-02-25T06:47:49Z | ---
license: apache-2.0
inference: false
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-H2Physics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-H2Physics
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8149
## Model description
This model was pretrained on my Anki cards for the H2 GCE A Levels (Singapore) syllabus, in the hopes of making it a Question and Answer chatbot.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 3.4296 |
| No log | 2.0 | 40 | 2.0993 |
| No log | 3.0 | 60 | 1.1277 |
| No log | 4.0 | 80 | 0.8149 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Ahmade/doctor_chatbot_v2 | Ahmade | 2023-03-06T06:17:14Z | 114 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-03-04T10:23:07Z | import torch

def chat(model, tokenizer):
    print("type \"q\" to quit. Automatically quits after 5 messages")
    chat_history_ids = None
    for step in range(5):
        message = input("MESSAGE: ")
        if message in ["", "q"]:  # if the user doesn't want to talk
            break
        # encode the new user input, add the eos_token and return a PyTorch tensor
        new_user_input_ids = tokenizer.encode(message + tokenizer.eos_token, return_tensors='pt')
        # append the new user input tokens to the chat history
        bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
        # generate a response while limiting the total chat history to 1000 tokens
        chat_history_ids = model.generate(
            bot_input_ids,
            max_length=1000,
            pad_token_id=tokenizer.eos_token_id,
            no_repeat_ngram_size=3,
            do_sample=True,
            top_k=100,
            top_p=0.7,
            temperature=0.8,
        )
        # pretty-print the last output tokens from the bot
        print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
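A short sketch of driving this `chat` helper end-to-end; the checkpoint is a GPT-2-style causal language model, so `AutoModelForCausalLM` is assumed to be the right loader:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the chatbot checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("Ahmade/doctor_chatbot_v2")
model = AutoModelForCausalLM.from_pretrained("Ahmade/doctor_chatbot_v2")

# Run the 5-turn interactive loop defined above
chat(model, tokenizer)
```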
|
kunalr63/ppo-LunarLander-v2 | kunalr63 | 2023-03-06T05:56:55Z | 2 | 1 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T05:56:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.85 +/- 21.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
eclaircies/ecolo-pas-ecolo-v0.2 | eclaircies | 2023-03-06T05:47:08Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"camembert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-03-06T05:46:40Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ecolo-pas-ecolo-v0.2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("eclaircies/ecolo-pas-ecolo-v0.2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
enoreyes/sks-man | enoreyes | 2023-03-06T05:33:29Z | 8 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:SG161222/Realistic_Vision_V1.3",
"base_model:adapter:SG161222/Realistic_Vision_V1.3",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-03-06T05:33:25Z | ---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V1.3_Fantasy.ai
instance_prompt: sks man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - sks-man
These are LoRA adaptation weights for [SG161222/Realistic_Vision_V1.3_Fantasy.ai](https://huggingface.co/SG161222/Realistic_Vision_V1.3_Fantasy.ai). The weights were trained on the instance prompt "sks man" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: Photo of sks man, medium closeup photo, handsome man, detailed (wrinkles, blemishes!, folds!, moles, viens, pores!!, skin imperfections:1.1), specular lighting, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, centered, Fujifilm XT3, crystal clear




|
AdonaiHS/a2c-AntBulletEnv-v0 | AdonaiHS | 2023-03-06T05:28:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T05:26:58Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1020.35 +/- 127.50
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Hourai/q-FrozenLake-v1-4x4-noSlippery | Hourai | 2023-03-06T04:57:48Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T04:57:39Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Hourai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
seungwoos/ppo-Huggy | seungwoos | 2023-03-06T04:56:43Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-03-06T04:56:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: seungwoos/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sd-concepts-library/cookiesmore | sd-concepts-library | 2023-03-06T04:48:40Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-03-06T04:48:38Z | ---
license: mit
---
### cookiesmore on Stable Diffusion
This is the `<cookie-photo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
parsa96/distilbert-base-uncased-finetuned-emotion | parsa96 | 2023-03-06T04:42:19Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-05T06:03:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9281573845269205
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.928
- F1: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8343 | 1.0 | 250 | 0.3130 | 0.911 | 0.9087 |
| 0.2517 | 2.0 | 500 | 0.2144 | 0.928 | 0.9282 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
research-backup/xlm-roberta-large-trimmed-pt-60000 | research-backup | 2023-03-06T04:40:21Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T04:24:43Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-pt-60000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-pt-60000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 365,392,482 |
| parameter_size_embedding | 256,002,048 | 61,442,048 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 65.23 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 60000 | 2 | |
research-backup/xlm-roberta-large-trimmed-es-60000 | research-backup | 2023-03-06T04:05:11Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T03:49:25Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-es-60000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-es-60000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 365,392,482 |
| parameter_size_embedding | 256,002,048 | 61,442,048 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 65.23 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 60000 | 2 | |
jojoUla/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40-2 | jojoUla | 2023-03-06T04:01:15Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T03:56:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-40-2
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.04 | 1.0 | 1 | 7.6410 |
| 6.4001 | 2.0 | 2 | 0.1745 |
| 4.9113 | 3.0 | 3 | 3.0671 |
| 3.8173 | 4.0 | 4 | 0.1307 |
| 3.231 | 5.0 | 5 | 4.0186 |
| 3.0906 | 6.0 | 6 | 0.0018 |
| 1.8898 | 7.0 | 7 | 0.9425 |
| 2.2709 | 8.0 | 8 | 0.2500 |
| 1.6371 | 9.0 | 9 | 4.0546 |
| 1.6533 | 10.0 | 10 | 0.3071 |
| 1.9309 | 11.0 | 11 | 1.8665 |
| 1.1357 | 12.0 | 12 | 0.9965 |
| 0.9922 | 13.0 | 13 | 0.4232 |
| 0.5621 | 14.0 | 14 | 0.4225 |
| 2.0588 | 15.0 | 15 | 1.2267 |
| 1.6497 | 16.0 | 16 | 0.0952 |
| 1.5047 | 17.0 | 17 | 1.1569 |
| 0.9653 | 18.0 | 18 | 0.7288 |
| 0.8737 | 19.0 | 19 | 2.7634 |
| 0.9605 | 20.0 | 20 | 0.3847 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
research-backup/xlm-roberta-large-trimmed-de-60000 | research-backup | 2023-03-06T03:47:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T03:31:38Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-de-60000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-de-60000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 365,392,482 |
| parameter_size_embedding | 256,002,048 | 61,442,048 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 65.23 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 60000 | 2 | |
primasr/indobert-for-eqa-finetuned | primasr | 2023-03-06T03:38:56Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"ms",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-24T03:20:31Z | ---
language:
- ms
datasets:
- squad_v2
metrics:
- exact_match
- f1
---
# Overview
This model is an experiment my friend and I carried out during a research internship at the National University of Singapore (NUS). We fine-tuned the model on our datasets in the Finance and Healthcare domains, in the Malay language.
# Details
- Finetuned from the base model by [Rifky](https://huggingface.co/Rifky/Indobert-QA)
- The base datasets from [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/)
- Our [datasets](https://ids.nus.edu.sg/microsites/nzsg-nlp/datahub.html) in Finance and Healthcare domain
# Fine-tuning Details
```py
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='test_trainer',
evaluation_strategy='epoch',
num_train_epochs=20,
optim='adamw_torch',
report_to='all',
logging_steps=1,
)
```
# How to use the Model
```py
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "primasr/indobert-for-eqa-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
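
# Illustrative call (placeholder strings only; the model expects Malay-language inputs):
result = nlp(question="<soalan dalam Bahasa Melayu>", context="<konteks dalam Bahasa Melayu>")
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}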
``` |
SyedAbdul/RFL-taxi | SyedAbdul | 2023-03-06T03:30:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T03:30:00Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RFL-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="SyedAbdul/RFL-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
research-backup/xlm-roberta-large-trimmed-fr-60000 | research-backup | 2023-03-06T03:29:20Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T03:13:50Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-fr-60000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-fr-60000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 365,392,482 |
| parameter_size_embedding | 256,002,048 | 61,442,048 |
| vocab_size | 250,002 | 60,002 |
| compression_rate_full | 100.0 | 65.23 |
| compression_rate_embedding | 100.0 | 24.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 60000 | 2 | |
SyedAbdul/RFL-FrozenLake-v1-4x4-noSlippery | SyedAbdul | 2023-03-06T03:28:33Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T03:28:32Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RFL-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="SyedAbdul/RFL-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Cyber-Machine/LunarLander-v2 | Cyber-Machine | 2023-03-06T03:09:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T02:57:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.54 +/- 18.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
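A minimal loading sketch filling in the TODO above (the checkpoint filename is an assumption; check the repo's file list for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed, not confirmed by this card.
checkpoint = load_from_hub(repo_id="Cyber-Machine/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```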
|
kelestemur/ppo-PyramidsRND | kelestemur | 2023-03-06T03:05:16Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-03-06T03:03:30Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: kelestemur/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mcaoun/dqn-spaceinvaders | mcaoun | 2023-03-06T03:03:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-03T23:18:47Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: QRDQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 1603.00 +/- 724.03
name: mean_reward
verified: false
---
# **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga mcaoun -f logs/
python -m rl_zoo3.enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga mcaoun -f logs/
python -m rl_zoo3.enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mcaoun
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_fraction', 0.025),
('frame_stack', 4),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('normalize', False)])
```
|
research-backup/xlm-roberta-large-trimmed-pt | research-backup | 2023-03-06T02:37:23Z | 144 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T02:21:24Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-pt`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-pt |
|:---------------------------|:--------------------|:--------------------------------------------|
| parameter_size_full | 560,142,482 | 372,108,282 |
| parameter_size_embedding | 256,002,048 | 68,151,296 |
| vocab_size | 250,002 | 66,554 |
| compression_rate_full | 100.0 | 66.43 |
| compression_rate_embedding | 100.0 | 26.62 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | | 2 | |
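The vocabulary figures reported above can be checked directly from the tokenizer; a quick sketch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("vocabtrimmer/xlm-roberta-large-trimmed-pt")
# Expected to be close to the vocab_size reported in the table (66,554).
print(len(tok))
```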
yenpolin/bonito-wav2vec2-tiny-demo | yenpolin | 2023-03-06T02:31:07Z | 137 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"dna_r9.4.1",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T05:51:37Z | ---
tags:
- automatic-speech-recognition
- dna_r9.4.1
- generated_from_trainer
model-index:
- name: bonito-wav2vec2-tiny-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bonito-wav2vec2-tiny-demo
This model is a fine-tuned version of [yenpolin/bonito-wav2vec2-tiny](https://huggingface.co/yenpolin/bonito-wav2vec2-tiny) on the DNA_R9.4.1 - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1499
- Mean Acc: 0.0
- Median Acc: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 320
- eval_batch_size: 768
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Acc | Median Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|
| No log | 0.51 | 160 | 1.1511 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Hourai/ppo-Huggy | Hourai | 2023-03-06T02:26:53Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-03-06T02:26:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Hourai/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bkhan2000/Reinforce-CartPole-v1 | bkhan2000 | 2023-03-06T02:21:02Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T02:18:23Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 450.70 +/- 67.31
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kelestemur/ppo-SnowballTarget | kelestemur | 2023-03-06T02:12:05Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-03-06T02:12:00Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: kelestemur/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
research-backup/xlm-roberta-large-trimmed-es | research-backup | 2023-03-06T01:44:28Z | 150 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-06T01:27:22Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-es`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-es |
|:---------------------------|:--------------------|:--------------------------------------------|
| parameter_size_full | 560,142,482 | 393,147,432 |
| parameter_size_embedding | 256,002,048 | 89,169,920 |
| vocab_size | 250,002 | 87,080 |
| compression_rate_full | 100.0 | 70.19 |
| compression_rate_embedding | 100.0 | 34.83 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | | 2 | |
hasarinduperera/ppo-PyramidsRND | hasarinduperera | 2023-03-06T01:24:51Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-03-06T01:24:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: hasarinduperera/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Seltion/ASDDD | Seltion | 2023-03-06T01:22:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-06T01:21:30Z | ---
license: creativeml-openrail-m
---
|
gabriellabollici/t5-base-neutralization | gabriellabollici | 2023-03-06T01:20:00Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"simplification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-05T21:07:30Z | ---
license: apache-2.0
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-base-neutralization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-neutralization
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0397
- Bleu: 52.5188
- Gen Len: 17.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 440 | 0.0482 | 52.3369 | 17.8125 |
| 0.1413 | 2.0 | 880 | 0.0397 | 52.5188 | 17.8333 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Kentris/Taxi-v3 | Kentris | 2023-03-06T01:03:59Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-06T01:03:58Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Kentris/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
alexsha/t5-large-finetuned-English-to-BASH-NL2BASH-customv2 | alexsha | 2023-03-05T23:45:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-05T19:57:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-finetuned-English-to-BASH-NL2BASH-customv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-finetuned-English-to-BASH-NL2BASH-customv2
This model is a fine-tuned version of [alexsha/t5-large-finetuned-English-to-BASH](https://huggingface.co/alexsha/t5-large-finetuned-English-to-BASH) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4279
- Nl2bash M: 0.2836
- Gen Len: 15.3647
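A minimal inference sketch with the text2text pipeline (the prompt format below is an assumption; the card does not document the expected input style):

```python
from transformers import pipeline

nl2bash = pipeline("text2text-generation", model="alexsha/t5-large-finetuned-English-to-BASH-NL2BASH-customv2")

# Illustrative natural-language request; output quality depends on the training prompt format.
print(nl2bash("list all files in the current directory"))
```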
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Nl2bash M | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| No log | 1.0 | 76 | 2.3263 | 0.1265 | 15.6353 |
| No log | 2.0 | 152 | 1.8083 | 0.1575 | 15.6235 |
| No log | 3.0 | 228 | 1.5713 | 0.2088 | 15.4 |
| No log | 4.0 | 304 | 1.4584 | 0.2622 | 15.3647 |
| No log | 5.0 | 380 | 1.4279 | 0.2836 | 15.3647 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.6.1
- Tokenizers 0.11.0
|
neatbullshit/Reinforce-Helicopter | neatbullshit | 2023-03-05T23:44:25Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T21:39:02Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Helicopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 33.69
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kestrel256/q-pixelcopter-reinforce | kestrel256 | 2023-03-05T23:16:36Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T23:16:33Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: q-pixelcopter-reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.40 +/- 23.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
systash/autotrain-fake_news_fine_tuned_v4-38998102353 | systash | 2023-03-05T23:15:24Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:systash/autotrain-data-fake_news_fine_tuned_v4",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-05T08:57:00Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "Enter text"
datasets:
- systash/autotrain-data-fake_news_fine_tuned_v4
co2_eq_emissions:
emissions: 0.007112583756560004
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 38998102353
- CO2 Emissions (in grams): 0.0071
## Validation Metrics
- Loss: 0.091
- Accuracy: 0.983
- Precision: 0.986
- Recall: 0.979
- AUC: 0.998
- F1: 0.982
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/systash/autotrain-fake_news_fine_tuned_v4-38998102353
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("systash/autotrain-fake_news_fine_tuned_v4-38998102353", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("systash/autotrain-fake_news_fine_tuned_v4-38998102353", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
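
# Illustrative post-processing (not from the original card); label names depend on the AutoTrain config.
predicted_class = outputs.logits.softmax(dim=-1).argmax(dim=-1).item()
print(model.config.id2label[predicted_class])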
``` |
Bitsy/Not-LLaMA-7B-Pytorch-Transformer-Compatible | Bitsy | 2023-03-05T22:56:58Z | 9 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-03-05T21:36:29Z | This is NOT the recently released LLaMA model converted to work with Transformers. It is NOT that. Simply use this model as you would any other. Below is an example:
import transformers

tokenizer = transformers.LLaMATokenizer.from_pretrained("Bitsy/Not-LLaMA-7B-Pytorch-Transformer-Compatible")
model = transformers.LLaMAForCausalLM.from_pretrained("Bitsy/Not-LLaMA-7B-Pytorch-Transformer-Compatible") |
sd-concepts-library/omlettehaai | sd-concepts-library | 2023-03-05T22:39:50Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-03-05T22:39:45Z | ---
license: mit
---
### omletteHAAI on Stable Diffusion
This is the `<egg-photo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
coreml-community/coreml-RPG | coreml-community | 2023-03-05T22:32:29Z | 0 | 19 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-01-27T03:39:51Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# RPG:
Source(s): [Hugging Face](https://huggingface.co/Anashel/rpg) - [CivitAI](https://civitai.com/models/1116/rpg)
**Latest Update: Feb 5th, 2023**
- Version 4.0 is live **[available here](https://huggingface.co/Anashel/rpg/tree/main/RPG-V4-Model-Download)**
- New Prompt User Guide for RPG v4 **[Download Now](https://huggingface.co/Anashel/rpg/resolve/main/RPG-V4-Model-Download/RPG-Guide-v4.pdf)**
## Contribute
If you wish to support the prompt research on this project:
- Rate RPG V4 on **[CivitAI](https://civitai.com/models/1116/rpg)**
- Donate (ETH Only): anashel.eth | 0xc4055f3c65D01a48Bc47bE87751794eA9f42E367
## Future Updates
I am in the process of writing a detailed guide with a list of words you can switch easily in the main prompt. Ex: Blood Elf Knight, Female Death Knight Mage, etc... In the meantime, feel free to share your creations on my *[Discord Server](https://discord.gg/7CGDRjDz7P)*
---
## RPG v4 Render Sample






---
**How to reach me**
- Reddit: [u/Anashel](https://www.reddit.com/user/anashel)
- Discord: [RPG V3 Channel](https://discord.gg/rDrhtWZk8u)
----
## RPG v3 Render Sample





## RPG v2 Render Sample
Generated with RPG V2. [Available here](https://huggingface.co/Anashel/rpg/tree/main/All-Concept-Zip-Format)




----
## OTHER EXAMPLE






 |
Theju/CA2CA3 | Theju | 2023-03-05T22:30:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T21:52:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: CA2CA3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA2CA3
This model is a fine-tuned version of [Sjdan/CA_1_2](https://huggingface.co/Sjdan/CA_1_2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
sptrodon/q-Taxi-v3 | sptrodon | 2023-03-05T22:29:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T21:55:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="sptrodon/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Theju/ca_3_healthy_1 | Theju | 2023-03-05T22:15:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T21:36:36Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ca_3_healthy_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ca_3_healthy_1
This model is a fine-tuned version of [Theju/healthy_1](https://huggingface.co/Theju/healthy_1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Theju/CA_2_INITIAL_1 | Theju | 2023-03-05T22:12:29Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T21:16:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: CA_2_INITIAL_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA_2_INITIAL_1
This model is a fine-tuned version of [Sjdan/CA_1_2](https://huggingface.co/Sjdan/CA_1_2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Theju/CA2CA4 | Theju | 2023-03-05T21:32:18Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T20:32:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: CA2CA4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA2CA4
This model is a fine-tuned version of [Sjdan/CA_1_2](https://huggingface.co/Sjdan/CA_1_2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
nlightcho/stable-diffusion-2-1 | nlightcho | 2023-03-05T21:26:34Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-05T21:03:13Z | ---
license: openrail++
tags:
- stable-diffusion
- text-to-image
pinned: true
---
# Stable Diffusion v2-1 Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`.
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_768-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt).
- Use it with 🧨 [`diffusers`](#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples
Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install diffusers transformers accelerate scipy safetensors
```
Running the pipeline (if you don't swap the scheduler it will run with the default DDIM, in this example we are swapping it to DPMSolverMultistepScheduler):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
model_id = "stabilityai/stable-diffusion-2-1"
# Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda`, for less VRAM usage (at the cost of speed)
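The low-VRAM note translates to one extra call on the same `pipe` object from the example above (a minimal sketch, not a separate requirement):

```python
# Reuses the pipeline built in the example above.
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # lower VRAM usage, somewhat slower
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```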
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
eoulster/q-Taxi-v3 | eoulster | 2023-03-05T21:25:29Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T21:25:27Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="eoulster/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
eoulster/q-FrozenLake-v1-4x4-noSlippery | eoulster | 2023-03-05T21:24:30Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T21:24:26Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="eoulster/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BobMcDear/convnext_xxlarge_clip_laion2b_soup_256 | BobMcDear | 2023-03-05T21:16:21Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-03-05T21:04:50Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
BobMcDear/convnext_xxlarge_clip_laion2b_rewind_256 | BobMcDear | 2023-03-05T21:05:40Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-03-05T20:51:32Z | Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
|
research-backup/xlm-roberta-large-trimmed-pt-15000 | research-backup | 2023-03-05T21:02:55Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-05T20:47:03Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-pt-15000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-pt-15000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 319,267,482 |
| parameter_size_embedding | 256,002,048 | 15,362,048 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 57.0 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| pt | vocabtrimmer/mc4_validation | text | pt | validation | 15000 | 2 | |
cuadron11/bert-finetuned-ner | cuadron11 | 2023-03-05T20:55:39Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-03-05T20:21:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9370860927152318
- name: Recall
type: recall
value: 0.9525412319084483
- name: F1
type: f1
value: 0.944750459021866
- name: Accuracy
type: accuracy
value: 0.9868134455760287
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9371
- Recall: 0.9525
- F1: 0.9448
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
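A minimal inference sketch with the standard `transformers` pipeline (the example sentence and aggregation strategy are illustrative, not part of this card):

```python
from transformers import pipeline

# Token-classification pipeline for this CoNLL-2003 fine-tune
ner = pipeline(
    "token-classification",
    model="cuadron11/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```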
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0884 | 1.0 | 1756 | 0.0734 | 0.9200 | 0.9366 | 0.9282 | 0.9818 |
| 0.0355 | 2.0 | 3512 | 0.0672 | 0.9311 | 0.9510 | 0.9410 | 0.9862 |
| 0.0178 | 3.0 | 5268 | 0.0614 | 0.9371 | 0.9525 | 0.9448 | 0.9868 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
FrancoisDongier/rl_course_vizdoom_health_gathering_supreme | FrancoisDongier | 2023-03-05T20:55:16Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T20:51:11Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.28 +/- 2.83
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r FrancoisDongier/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
waifuwishes/WW_LoRAs | waifuwishes | 2023-03-05T20:55:07Z | 0 | 2 | null | [
"lora",
"anime",
"region:us"
]
| null | 2023-02-12T14:43:37Z | ---
tags:
- lora
- anime
---
# Table of Contents
- [Overview](#overview)
- [Installation](#installation)
- [Usage](#usage)
- [LoRAs](#loras)
- [SocialMedia](#socialmedia)
# Overview
Inspired by the amazing work done by [Trauter](https://huggingface.co/YoungMasterFromSect/Trauter_LoRAs), I decided to make a contribution
to society by extending his work and developing new LoRAs.
I train and test models on anime checkpoints like [WarriorMama777](https://huggingface.co/WarriorMama777/OrangeMixs), [Andite](https://huggingface.co/andite/anything-v4.0), and
[Gsdf](https://huggingface.co/gsdf/Counterfeit-V2.5); for that reason, I don't know how they will perform on your specific model.
You can find a comparison grid in the **[model_name]/Previews** folder.
Previews have metadata containing the prompt and settings used to create them; you can access it via the "PNG Info" tab in [Automatic1111/WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
Every model is trained with [danbooru](https://danbooru.donmai.us/tags?commit=Search&search%5Bhide_empty%5D=yes&search%5Border%5D=count) tags, using [wd14-tagger](https://github.com/toriato/stable-diffusion-webui-wd14-tagger) with tweaks.
Additionally, every character folder contains a JSON file with information about the [training](https://github.com/bmaltais/kohya_ss) settings used for each specific model.
# Installation
Paste the desired model (if you want a thumbnail, you can also paste the preview image) into **\stable-diffusion-webui\models\Lora**.
Since LoRAs are now available directly in WebUI, you can use them as presented in the following [guide](https://rentry.org/2chAI_LoRA_Dreambooth_guide_english#usage).
# Usage
I name my models with the **ww** prefix.
Some models may have additional outfits; check the LoRA details for the name of the skin.
```
ww_[source_name]_[character_name]_[optional_skin]
ww_ov_widowmaker
ww_al_pe_default_skin
```
I wanted to create flexible models, so I try to balance my LoRAs to work at a weight of 1. You may want to customize specific parts like hair type or length, clothes, breast size, or accessories at a lower weight if it's not working for you.
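For example, in the WebUI prompt a LoRA is activated with the standard `<lora:name:weight>` syntax; the file name below is assumed to match the model name:

```
<lora:ww_ov_widowmaker:1>, ww_ov_widowmaker, 1girl, solo, pink bodysuit, looking at viewer
```

Lower the weight (e.g. `<lora:ww_ov_widowmaker:0.8>`) if the baked-in outfit overrides your own prompt.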
# LoRAs
- [Overwatch](#overwatch)
- [Widowmaker](#widowmaker)
- [Ashe](#ashe)
- [AzurLane](#azurlane)
- [PrinzEugen](#prinzeugen)
# Overwatch
- # Widowmaker
[<img src="https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Widowmaker/Previews/ww_ov_widowmaker_v2.png" width="512" height="768">](https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Widowmaker/Previews/ww_ov_widowmaker_v2.png)
<details>
<summary>Prompt</summary>
<pre>
ww_ov_widowmaker, (masterpiece:1.2), (best quality), (extremely detailed), highres, illustration, depth of field, dark intense shadows, sharp focus, soft light, (good composition), standing,
1girl, solo, small breasts, pink bodysuit, looking at viewer, serious,
outdoors, night, sky, detailed background
Negative prompt: EasyNegative, extra fingers, fewer fingers, disembodied limb, extra legs, extra arms, bad anatomy, username, artist name, signature
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2475013484, Size: 512x768, Model hash: 0873291ac5, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 1.2, Hires upscaler: Latent
</pre>
</details>
<details>
<summary>Details</summary>
<pre>
Changelog:
v1 - legacy option - requires a large number of tags to function properly
v2 - less overfitted - pruned - only outfit is tagged
</pre>
</details>
- # Ashe
[<img src="https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Ashe/Previews/ww_ov_ashe_v2.png" width="512" height="768">](https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Overwatch/Ashe/Previews/ww_ov_ashe_v2.png)
<details>
<summary>Prompt</summary>
<pre>
ww_ov_ashe, (masterpiece:1.2), (best quality), (extremely detailed), highres, illustration, depth of field, dark intense shadows, sharp focus, soft light, (good composition), standing,
1girl, solo, bob cut, white shirt, vest, hat, red necktie, shoulder armor, looking at viewer,
outdoors, sunset, detailed background
Negative prompt: EasyNegative, extra fingers,fewer fingers, username, artist name, signature, disembodied limb, extra legs, extra arms, extra fingers, bad anatomy, username, signature
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3036965743, Size: 512x768, Model hash: 0873291ac5, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 1.2, Hires upscaler: Latent
</pre>
</details>
<details>
<summary>Details</summary>
<pre>
Changelog:
v1 - legacy option - requires a large number of tags to function properly
v2 - less overfitted - pruned - only outfit is tagged
</pre>
</details>
# AzurLane
- # PrinzEugen
[<img src="https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Azur_Lane/Prinz_Eugen/Previews/ww_al_pe_v1.png" width="512" height="768">](https://huggingface.co/waifuwishes/WW_LoRAs/resolve/main/Azur_Lane/Prinz_Eugen/Previews/ww_al_pe_v1.png)
<details>
<summary>Prompt</summary>
<pre>
ww_al_pe_default_skin, (masterpiece:1.2), (best quality), ultra-detailed, digital painting, good composition, depth of field, sitting, crossed legs,
1girl, solo, medium breasts, machinery, turret, smirk, (arms behind back),
outdoors, rainbow, birds, manjuu \(azur lane\), detailed background
Negative prompt: EasyNegative, extra fingers, fewer fingers, disembodied limb, extra legs, extra arms, bad anatomy, username, artist name, signature, nude, nsfw, bare shoulders
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 8, Seed: 3697064953, Size: 512x768, Model hash: 6e430eb514, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 1.2, Hires upscaler: Latent (nearest-exact)
</pre>
</details>
<details>
<summary>Details</summary>
<pre>
Available skins:
ww_al_pe_default_skin, ww_al_pe_unfading_smile_skin, ww_al_pe_final_lap_skin, ww_al_pe_cordial_cornflower_skin, ww_al_pe_kindred_evening_spirits_skin, ww_al_pe_profusion_of_flowers_skin, ww_al_pe_wedding_skin, ww_al_pe_nurse_skin
Changelog:
v1 - pruned - only outfit is tagged
</pre>
</details>
# SocialMedia
[Twitter](https://twitter.com/Waifu_Wishes)
[Reddit](https://www.reddit.com/user/waifu_wishes)
[Instagram](https://www.instagram.com/waifuwishes/) |
gokuls/bert_12_layer_model_v2_complete_training | gokuls | 2023-03-05T20:48:35Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-28T11:12:06Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8623
- Accuracy: 0.6328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.1798 | 0.11 | 10000 | 6.1719 | 0.1485 |
| 6.0527 | 0.22 | 20000 | 6.0469 | 0.1502 |
| 5.6176 | 0.33 | 30000 | 5.5703 | 0.1772 |
| 3.8786 | 0.44 | 40000 | 3.7441 | 0.3851 |
| 3.4104 | 0.55 | 50000 | 3.3105 | 0.4327 |
| 3.1802 | 0.66 | 60000 | 3.0781 | 0.4601 |
| 3.0115 | 0.76 | 70000 | 2.9141 | 0.4804 |
| 2.8893 | 0.87 | 80000 | 2.7930 | 0.4956 |
| 2.7983 | 0.98 | 90000 | 2.6973 | 0.5081 |
| 2.7039 | 1.09 | 100000 | 2.6016 | 0.5215 |
| 2.5658 | 1.2 | 110000 | 2.4551 | 0.5448 |
| 2.4846 | 1.31 | 120000 | 2.3730 | 0.5576 |
| 2.4284 | 1.42 | 130000 | 2.3164 | 0.5663 |
| 2.3723 | 1.53 | 140000 | 2.2734 | 0.5726 |
| 2.3382 | 1.64 | 150000 | 2.2344 | 0.5787 |
| 2.3084 | 1.75 | 160000 | 2.2031 | 0.5829 |
| 2.2773 | 1.86 | 170000 | 2.1758 | 0.5872 |
| 2.2492 | 1.97 | 180000 | 2.1484 | 0.5909 |
| 2.2261 | 2.08 | 190000 | 2.1230 | 0.5943 |
| 2.1961 | 2.18 | 200000 | 2.1016 | 0.5976 |
| 2.1838 | 2.29 | 210000 | 2.0820 | 0.6004 |
| 2.164 | 2.4 | 220000 | 2.0645 | 0.6031 |
| 2.1456 | 2.51 | 230000 | 2.0469 | 0.6052 |
| 2.1308 | 2.62 | 240000 | 2.0293 | 0.6080 |
| 2.1161 | 2.73 | 250000 | 2.0137 | 0.6101 |
| 2.1052 | 2.84 | 260000 | 2.0020 | 0.6120 |
| 2.0856 | 2.95 | 270000 | 1.9902 | 0.6142 |
| 2.0743 | 3.06 | 280000 | 1.9775 | 0.6159 |
| 2.0598 | 3.17 | 290000 | 1.9678 | 0.6171 |
| 2.0492 | 3.28 | 300000 | 1.9561 | 0.6190 |
| 2.0395 | 3.39 | 310000 | 1.9453 | 0.6203 |
| 2.0328 | 3.5 | 320000 | 1.9365 | 0.6217 |
| 2.0204 | 3.6 | 330000 | 1.9287 | 0.6230 |
| 2.0142 | 3.71 | 340000 | 1.9199 | 0.6243 |
| 2.0021 | 3.82 | 350000 | 1.9121 | 0.6257 |
| 2.006 | 3.93 | 360000 | 1.9043 | 0.6264 |
| 1.9917 | 4.04 | 370000 | 1.8984 | 0.6274 |
| 1.9881 | 4.15 | 380000 | 1.8916 | 0.6284 |
| 1.9843 | 4.26 | 390000 | 1.8867 | 0.6291 |
| 1.977 | 4.37 | 400000 | 1.8809 | 0.6301 |
| 1.9697 | 4.48 | 410000 | 1.8770 | 0.6306 |
| 1.9655 | 4.59 | 420000 | 1.8740 | 0.6313 |
| 1.9649 | 4.7 | 430000 | 1.8691 | 0.6320 |
| 1.9622 | 4.81 | 440000 | 1.8662 | 0.6324 |
| 1.9539 | 4.92 | 450000 | 1.8623 | 0.6328 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.1
- Tokenizers 0.13.2
|
research-backup/xlm-roberta-large-trimmed-it-45000 | research-backup | 2023-03-05T20:24:51Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-05T20:09:24Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-it-45000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-it-45000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 350,017,482 |
| parameter_size_embedding | 256,002,048 | 46,082,048 |
| vocab_size | 250,002 | 45,002 |
| compression_rate_full | 100.0 | 62.49 |
| compression_rate_embedding | 100.0 | 18.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 45000 | 2 | |
Theju/CA_2_NEW_1 | Theju | 2023-03-05T20:23:34Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T19:42:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: CA_2_NEW_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CA_2_NEW_1
This model is a fine-tuned version of [Theju/healthy_1](https://huggingface.co/Theju/healthy_1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
castejon777/PPO-LunarLander-v2 | castejon777 | 2023-03-05T20:02:31Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T20:02:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.78 +/- 16.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
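A minimal sketch of what that code could look like (the checkpoint filename is an assumption; check the repository's file list for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is assumed, not documented in this card
checkpoint = load_from_hub(repo_id="castejon777/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy for a few episodes
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```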
|
eoulster/ppo-Huggy | eoulster | 2023-03-05T19:54:10Z | 18 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-03-05T19:54:01Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: eoulster/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
edbeeching/gpt-neo-125M-imdb_adapter-lr-5e4 | edbeeching | 2023-03-05T19:48:59Z | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"region:us"
]
| null | 2023-03-05T19:42:21Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: gpt-neo-125M-imdb_adapter-lr-5e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-imdb_adapter-lr-5e4
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ECarbenia/grimoiresigils | ECarbenia | 2023-03-05T19:44:25Z | 32 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-05T19:04:04Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
This model was trained with 300 sigils from classic grimoires and a few modern grimoires.
Some of the sources include Heptameron, Verum, Goetia, Ars Almadel, Ars Paulina, Honorius, Hygromanteia, The works of Dr. John Dee, A.E. Waite's Turba Philosophorum, etc.
Veve from various spirits within the tradition of Vodun were included, as well as examples from modern practitioners.
The model skews results towards the style of classic sigils and often produces somewhat familiar forms. Type in the name of a desired spirit/effect and run some tests with it.
Results are generally black and white, unlike models which are not trained on this dataset.
Special care has been taken to include multiple traditions, and spirits corresponding to each element, zodiac, direction, tree of life sphere, etc. in roughly equal parts.
Be sure to include the word "sigil" in the prompt. The prompt can be strengthened by including the term "grimoiresigils" as well.
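A minimal generation sketch with `diffusers` (the prompt wording is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("ECarbenia/grimoiresigils", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Keep "sigil" (and optionally "grimoiresigils") in the prompt, as noted above
image = pipe("grimoiresigils, sigil of protection, black and white ink on parchment").images[0]
image.save("sigil.png")
```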
### grimoiresigils Dreambooth model trained by ECarbenia with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
research-backup/xlm-roberta-large-trimmed-es-45000 | research-backup | 2023-03-05T19:31:36Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-05T19:15:59Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-es-45000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-es-45000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 350,017,482 |
| parameter_size_embedding | 256,002,048 | 46,082,048 |
| vocab_size | 250,002 | 45,002 |
| compression_rate_full | 100.0 | 62.49 |
| compression_rate_embedding | 100.0 | 18.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 45000 | 2 | |
afaji/fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-3e-05 | afaji | 2023-03-05T19:24:04Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-05T18:41:43Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-3e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-3e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1016
- Accuracy: 0.3409
## Model description
More information needed
## Intended uses & limitations
More information needed
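A usage sketch for NLI-style inference (the example sentences are illustrative, and the label names depend on the saved config, so they may simply come out as `LABEL_0/1/2`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "afaji/fine-tuned-IndoNLI-Basic-with-xlm-roberta-large-LR-3e-05"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Premise / hypothesis pair (IndoNLI is an Indonesian NLI dataset)
inputs = tokenizer("Dia sedang membaca buku di perpustakaan.",
                   "Dia berada di perpustakaan.",
                   return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```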
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1164 | 0.5 | 80 | 1.1892 | 0.2918 |
| 1.1255 | 0.99 | 160 | 1.1077 | 0.3409 |
| 1.1308 | 1.49 | 240 | 1.1054 | 0.3409 |
| 1.119 | 1.98 | 320 | 1.0943 | 0.3673 |
| 1.1218 | 2.48 | 400 | 1.1094 | 0.3673 |
| 1.1216 | 2.98 | 480 | 1.1402 | 0.2918 |
| 1.1149 | 3.48 | 560 | 1.1016 | 0.3409 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
research-backup/xlm-roberta-large-trimmed-es-15000 | research-backup | 2023-03-05T18:57:47Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-05T18:43:45Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-es-15000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-es-15000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 319,267,482 |
| parameter_size_embedding | 256,002,048 | 15,362,048 |
| vocab_size | 250,002 | 15,002 |
| compression_rate_full | 100.0 | 57.0 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 15000 | 2 | |
gokuls/hBERTv1_data_aug_stsb | gokuls | 2023-03-05T18:55:30Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-05T16:34:55Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_data_aug_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.4294216093279403
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_data_aug_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1580
- Pearson: 0.4471
- Spearmanr: 0.4294
- Combined Score: 0.4383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 0.5955 | 1.0 | 1259 | 2.1996 | 0.4857 | 0.4640 | 0.4748 |
| 0.1017 | 2.0 | 2518 | 2.1580 | 0.4471 | 0.4294 | 0.4383 |
| 0.06 | 3.0 | 3777 | 2.5480 | 0.4052 | 0.3733 | 0.3892 |
| 0.0454 | 4.0 | 5036 | 2.1594 | 0.4500 | 0.4193 | 0.4347 |
| 0.038 | 5.0 | 6295 | 2.6866 | 0.4071 | 0.3658 | 0.3865 |
| 0.0318 | 6.0 | 7554 | 2.8519 | 0.3891 | 0.3435 | 0.3663 |
| 0.0283 | 7.0 | 8813 | 2.6783 | 0.3836 | 0.3464 | 0.3650 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.1
- Tokenizers 0.13.2
|
darthrevenge/Reinforce-Carpole-1 | darthrevenge | 2023-03-05T18:51:07Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T18:50:58Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Carpole-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Theju/M05_1 | Theju | 2023-03-05T18:37:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-05T16:51:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: M05_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M05_1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
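A usage sketch (this assumes the Wav2Vec2 processor was saved alongside the model and that the audio file is 16 kHz mono; the filename is illustrative):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Theju/M05_1")
print(asr("speech_sample.wav"))
```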
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
dyhpoon/ppo-LunarLander-v2 | dyhpoon | 2023-03-05T18:28:54Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T18:28:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.69 +/- 30.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
eoulster/ppo-LunarLander-v2 | eoulster | 2023-03-05T18:28:34Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-05T18:28:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.49 +/- 21.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
research-backup/xlm-roberta-large-trimmed-de-30000 | research-backup | 2023-03-05T18:11:41Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-05T17:56:42Z | # Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-de-30000`
This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-de-30000 |
|:---------------------------|:--------------------|:--------------------------------------------------|
| parameter_size_full | 560,142,482 | 334,642,482 |
| parameter_size_embedding | 256,002,048 | 30,722,048 |
| vocab_size | 250,002 | 30,002 |
| compression_rate_full | 100.0 | 59.74 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| de | vocabtrimmer/mc4_validation | text | de | validation | 30000 | 2 | |
neatbullshit/q-Taxi-v3 | neatbullshit | 2023-03-05T17:52:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-17T08:37:43Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helpers defined in the Deep RL Course notebook
model = load_from_hub(repo_id="neatbullshit/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
neatbullshit/q-FrozenLake-v1-4x4-noSlippery | neatbullshit | 2023-03-05T17:51:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-09T01:07:21Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook
model = load_from_hub(repo_id="neatbullshit/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ammr/ppo-SnowballTarget | ammr | 2023-03-05T17:44:00Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-03-05T17:00:25Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: ammr/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dinesht/t5-small-finetuned-wikisql | dinesht | 2023-03-05T17:39:15Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikisql",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-05T04:50:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1266
- Rouge2 Precision: 0.817
- Rouge2 Recall: 0.7258
- Rouge2 Fmeasure: 0.7616
## Model description
More information needed
## Intended uses & limitations
More information needed
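A generation sketch (the input prefix used during fine-tuning is not documented here; "translate English to SQL:" is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "dinesht/t5-small-finetuned-wikisql"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# The exact prompt format depends on how the training inputs were built
question = "translate English to SQL: How many heads of the departments are older than 56?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```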
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.2018 | 1.0 | 4049 | 0.1603 | 0.7905 | 0.7009 | 0.736 |
| 0.1677 | 2.0 | 8098 | 0.1414 | 0.8061 | 0.7147 | 0.7506 |
| 0.156 | 3.0 | 12147 | 0.1314 | 0.8127 | 0.722 | 0.7576 |
| 0.1469 | 4.0 | 16196 | 0.1280 | 0.8152 | 0.7238 | 0.7597 |
| 0.1433 | 5.0 | 20245 | 0.1266 | 0.817 | 0.7258 | 0.7616 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
paicup09/a2c-PandaReachDense-v2 | paicup09 | 2023-03-05T17:31:05Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-29T23:40:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.37 +/- 0.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
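A minimal sketch of what that code could look like (the checkpoint filename is an assumption; check the repository's file list for the actual name):

```python
import gym
import panda_gym  # panda-gym 2.x registers the PandaReachDense-v2 environment on import

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is assumed, not documented in this card
checkpoint = load_from_hub(repo_id="paicup09/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

# Evaluate the loaded policy for a few episodes
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```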
|