modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-14 18:27:59) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 520 distinct values) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-14 18:27:48) | card (string, 11 chars to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
lsaulier/poca-SoccerTwo | lsaulier | 2023-02-10T07:59:30Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-10T07:59:21Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
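To start a fresh run instead of resuming, the same command is typically used without `--resume` (a sketch; depending on your ML-Agents version, `--force` overwrites an existing run id):
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id>
```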
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: lsaulier/poca-SoccerTwo
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Anorak/layoutlm-funsd | Anorak | 2023-02-10T07:57:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-10T07:51:07Z | ---
tags:
- generated_from_trainer
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6857
- Answer: {'precision': 0.7176981541802389, 'recall': 0.8170580964153276, 'f1': 0.7641618497109827, 'number': 809}
- Header: {'precision': 0.28368794326241137, 'recall': 0.33613445378151263, 'f1': 0.3076923076923077, 'number': 119}
- Question: {'precision': 0.7773820124666073, 'recall': 0.819718309859155, 'f1': 0.7979890310786105, 'number': 1065}
- Overall Precision: 0.7204
- Overall Recall: 0.7898
- Overall F1: 0.7535
- Overall Accuracy: 0.8139
## Model description
More information needed
## Intended uses & limitations
More information needed
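The card does not include usage code, so here is a minimal inference sketch (not from the original card): it assumes LayoutLM's standard inputs of token ids plus per-word bounding boxes normalized to a 0-1000 grid; the words and boxes below are dummy values for illustration.
```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("Anorak/layoutlm-funsd")

words = ["INVOICE", "DATE:", "2023-02-10"]  # dummy document words
word_boxes = [[60, 40, 200, 60], [60, 80, 130, 100], [140, 80, 260, 100]]  # dummy 0-1000 boxes

token_ids, token_boxes = [], []
for word, box in zip(words, word_boxes):
    ids = tokenizer.encode(word, add_special_tokens=False)
    token_ids.extend(ids)
    token_boxes.extend([box] * len(ids))  # repeat the word box for each subtoken

# prepend [CLS] / append [SEP] with their conventional boxes
input_ids = [tokenizer.cls_token_id] + token_ids + [tokenizer.sep_token_id]
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

with torch.no_grad():
    logits = model(
        input_ids=torch.tensor([input_ids]),
        bbox=torch.tensor([token_boxes]),
        attention_mask=torch.ones(1, len(input_ids), dtype=torch.long),
    ).logits

predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```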
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8064 | 1.0 | 10 | 1.6080 | {'precision': 0.020618556701030927, 'recall': 0.012360939431396786, 'f1': 0.01545595054095827, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.2702127659574468, 'recall': 0.11924882629107982, 'f1': 0.16547231270358306, 'number': 1065} | 0.1435 | 0.0687 | 0.0929 | 0.3378 |
| 1.4826 | 2.0 | 20 | 1.2520 | {'precision': 0.20166320166320167, 'recall': 0.23980222496909764, 'f1': 0.21908526256352345, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4309507286606523, 'recall': 0.5830985915492958, 'f1': 0.49561053471667993, 'number': 1065} | 0.3392 | 0.4089 | 0.3708 | 0.5993 |
| 1.1438 | 3.0 | 30 | 0.9584 | {'precision': 0.463519313304721, 'recall': 0.5339925834363412, 'f1': 0.49626651349798967, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6199664429530202, 'recall': 0.6938967136150235, 'f1': 0.6548515728843598, 'number': 1065} | 0.5492 | 0.5876 | 0.5678 | 0.6897 |
| 0.8546 | 4.0 | 40 | 0.7900 | {'precision': 0.5885714285714285, 'recall': 0.7639060568603214, 'f1': 0.6648735879505111, 'number': 809} | {'precision': 0.06666666666666667, 'recall': 0.025210084033613446, 'f1': 0.036585365853658534, 'number': 119} | {'precision': 0.6505823627287853, 'recall': 0.7342723004694836, 'f1': 0.6898985443317159, 'number': 1065} | 0.6108 | 0.7040 | 0.6541 | 0.7537 |
| 0.6765 | 5.0 | 50 | 0.7144 | {'precision': 0.6514047866805411, 'recall': 0.7737948084054388, 'f1': 0.7073446327683616, 'number': 809} | {'precision': 0.09230769230769231, 'recall': 0.05042016806722689, 'f1': 0.06521739130434782, 'number': 119} | {'precision': 0.7019810508182601, 'recall': 0.7652582159624414, 'f1': 0.7322551662174304, 'number': 1065} | 0.6616 | 0.7260 | 0.6923 | 0.7773 |
| 0.5613 | 6.0 | 60 | 0.6796 | {'precision': 0.6635514018691588, 'recall': 0.7898640296662547, 'f1': 0.7212189616252822, 'number': 809} | {'precision': 0.15306122448979592, 'recall': 0.12605042016806722, 'f1': 0.1382488479262673, 'number': 119} | {'precision': 0.7274320771253286, 'recall': 0.7793427230046949, 'f1': 0.7524932003626473, 'number': 1065} | 0.6739 | 0.7446 | 0.7075 | 0.7927 |
| 0.4872 | 7.0 | 70 | 0.6554 | {'precision': 0.6592517694641051, 'recall': 0.8059332509270705, 'f1': 0.7252502780867631, 'number': 809} | {'precision': 0.22549019607843138, 'recall': 0.19327731092436976, 'f1': 0.20814479638009048, 'number': 119} | {'precision': 0.7383177570093458, 'recall': 0.815962441314554, 'f1': 0.775200713648528, 'number': 1065} | 0.6808 | 0.7747 | 0.7247 | 0.7997 |
| 0.4334 | 8.0 | 80 | 0.6526 | {'precision': 0.6941176470588235, 'recall': 0.8022249690976514, 'f1': 0.7442660550458714, 'number': 809} | {'precision': 0.24545454545454545, 'recall': 0.226890756302521, 'f1': 0.23580786026200873, 'number': 119} | {'precision': 0.7493627867459643, 'recall': 0.828169014084507, 'f1': 0.7867975022301517, 'number': 1065} | 0.7012 | 0.7817 | 0.7393 | 0.8035 |
| 0.3941 | 9.0 | 90 | 0.6694 | {'precision': 0.7048997772828508, 'recall': 0.7824474660074165, 'f1': 0.741652021089631, 'number': 809} | {'precision': 0.22099447513812154, 'recall': 0.33613445378151263, 'f1': 0.26666666666666666, 'number': 119} | {'precision': 0.7218984179850125, 'recall': 0.8140845070422535, 'f1': 0.76522506619594, 'number': 1065} | 0.6754 | 0.7727 | 0.7208 | 0.8007 |
| 0.3556 | 10.0 | 100 | 0.6607 | {'precision': 0.694006309148265, 'recall': 0.8158220024721878, 'f1': 0.75, 'number': 809} | {'precision': 0.25, 'recall': 0.2773109243697479, 'f1': 0.26294820717131473, 'number': 119} | {'precision': 0.7846153846153846, 'recall': 0.8140845070422535, 'f1': 0.7990783410138248, 'number': 1065} | 0.7130 | 0.7827 | 0.7462 | 0.8068 |
| 0.3245 | 11.0 | 110 | 0.6728 | {'precision': 0.6990595611285266, 'recall': 0.826946847960445, 'f1': 0.7576443941109853, 'number': 809} | {'precision': 0.2892561983471074, 'recall': 0.29411764705882354, 'f1': 0.2916666666666667, 'number': 119} | {'precision': 0.7817703768624014, 'recall': 0.8375586854460094, 'f1': 0.8087035358114233, 'number': 1065} | 0.7192 | 0.8008 | 0.7578 | 0.8089 |
| 0.3113 | 12.0 | 120 | 0.6799 | {'precision': 0.71875, 'recall': 0.796044499381953, 'f1': 0.755425219941349, 'number': 809} | {'precision': 0.25903614457831325, 'recall': 0.36134453781512604, 'f1': 0.3017543859649123, 'number': 119} | {'precision': 0.775330396475771, 'recall': 0.8262910798122066, 'f1': 0.8, 'number': 1065} | 0.7132 | 0.7863 | 0.7480 | 0.8106 |
| 0.2921 | 13.0 | 130 | 0.6836 | {'precision': 0.7070063694267515, 'recall': 0.823238566131026, 'f1': 0.7607081667618503, 'number': 809} | {'precision': 0.32432432432432434, 'recall': 0.3025210084033613, 'f1': 0.31304347826086953, 'number': 119} | {'precision': 0.7976513098464318, 'recall': 0.8291079812206573, 'f1': 0.8130755064456722, 'number': 1065} | 0.7338 | 0.7953 | 0.7633 | 0.8122 |
| 0.2841 | 14.0 | 140 | 0.6848 | {'precision': 0.7150537634408602, 'recall': 0.8220024721878862, 'f1': 0.7648073605520415, 'number': 809} | {'precision': 0.26666666666666666, 'recall': 0.33613445378151263, 'f1': 0.2973977695167286, 'number': 119} | {'precision': 0.7841726618705036, 'recall': 0.8187793427230047, 'f1': 0.8011024345429489, 'number': 1065} | 0.7194 | 0.7913 | 0.7536 | 0.8127 |
| 0.2793 | 15.0 | 150 | 0.6857 | {'precision': 0.7176981541802389, 'recall': 0.8170580964153276, 'f1': 0.7641618497109827, 'number': 809} | {'precision': 0.28368794326241137, 'recall': 0.33613445378151263, 'f1': 0.3076923076923077, 'number': 119} | {'precision': 0.7773820124666073, 'recall': 0.819718309859155, 'f1': 0.7979890310786105, 'number': 1065} | 0.7204 | 0.7898 | 0.7535 | 0.8139 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.8.0+cu101
- Tokenizers 0.13.2
|
cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T07:56:42Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"ChopperCommand-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T07:56:36Z | ---
tags:
- ChopperCommand-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ChopperCommand-v5
type: ChopperCommand-v5
metrics:
- type: mean_reward
value: 52100.00 +/- 43721.02
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5**
This is a trained model of a PPO agent playing ChopperCommand-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id ChopperCommand-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'ChopperCommand-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
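The derived values in this dict are consistent with each other; a quick sanity check of the arithmetic, assuming the usual CleanRL derivations:
```python
# batch_size = num_envs * num_steps, minibatch_size = batch_size / num_minibatches,
# num_updates = total_timesteps // batch_size
num_envs, num_steps = 60, 128
num_minibatches, total_timesteps = 4, 50_000_000

batch_size = num_envs * num_steps               # 60 * 128 = 7680
minibatch_size = batch_size // num_minibatches  # 7680 / 4 = 1920
num_updates = total_timesteps // batch_size     # 50_000_000 // 7680 = 6510

assert (batch_size, minibatch_size, num_updates) == (7680, 1920, 6510)
```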
|
gyeoldere/DeBERTa-finetuned-SNLI2 | gyeoldere | 2023-02-10T07:51:01Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"deberta",
"generated_from_trainer",
"dataset:snli",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-09T04:37:00Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: DeBERTa-finetuned-SNLI2
results: []
metrics:
- accuracy
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-finetuned-SNLI2
This model is a fine-tuned version of [gyeoldere/test_trainer](https://huggingface.co/gyeoldere/test_trainer) on the snli dataset.
Test_trainer model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the snli dataset.
This model achieves the following results on the evaluation set:
- NLI accuracy: 0.86
- MLM accuracy: 0.68
## Model description
This model was fine-tuned to perform two tasks simultaneously: an NLI task and an MLM task.
The output vector of DeBERTa is passed through two separate fully connected layers to produce the predictions for each task.
I used the head structure introduced in the BERT paper, as implemented in Hugging Face Transformers (DebertaForTokenClassification and DebertaForMaskedLM).
[https://huggingface.co/docs/transformers/index]
Binary cross-entropy loss is computed for each task, and the two losses are added to obtain the final loss:
final_loss = MLM_loss + NLI_loss
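A minimal sketch of that combined objective, with dummy tensors standing in for the two heads' outputs (the real logits come from the fine-tuned DeBERTa heads):
```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
mlm_logits = torch.randn(8, 10, requires_grad=True)  # dummy MLM head output
nli_logits = torch.randn(8, 3, requires_grad=True)   # dummy NLI head output
mlm_targets, nli_targets = torch.rand(8, 10), torch.rand(8, 3)

# final_loss = MLM_loss + NLI_loss, as described above
final_loss = bce(mlm_logits, mlm_targets) + bce(nli_logits, nli_targets)
final_loss.backward()
```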
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2 |
lamducanhndgv/sst2-custom-setfit-model | lamducanhndgv | 2023-02-10T07:48:13Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-10T07:47:54Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# sst2-custom-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sst2-custom-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
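For reference, a hedged sketch of how a SetFit model like this one is typically trained on sst2 (it assumes the pre-1.0 `SetFitTrainer` API and a few-shot sample of the `sst2` dataset; the base model below is a common choice, not necessarily the one used here):
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Few-shot sample of sst2 (columns: "sentence", "label")
dataset = load_dataset("sst2")
train_ds = dataset["train"].shuffle(seed=42).select(range(16))
eval_ds = dataset["validation"]

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()       # contrastive fine-tuning + classification head (steps 1-2 above)
print(trainer.evaluate())
```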
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
fathyshalab/massive_calendar-roberta-large-v1-3-93 | fathyshalab | 2023-02-10T07:32:12Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-10T07:31:47Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/massive_calendar-roberta-large-v1-3-93
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-3-93")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T07:31:38Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Centipede-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T07:31:34Z | ---
tags:
- Centipede-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Centipede-v5
type: Centipede-v5
metrics:
- type: mean_reward
value: 9061.90 +/- 4899.48
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
jeremy8767/sd-class-butterflies-32 | jeremy8767 | 2023-02-10T07:24:56Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"region:us"
]
| unconditional-image-generation | 2023-02-10T07:24:38Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jeremy8767/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
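The pipeline returns PIL images, so samples from the block above can be saved directly:
```python
# Generate a small batch and save each PIL image to disk
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"butterfly-{i}.png")
```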
|
fathyshalab/massive_transport-roberta-large-v1-3-3 | fathyshalab | 2023-02-10T07:23:48Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-10T07:23:24Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/massive_transport-roberta-large-v1-3-3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-3-3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
swl-models/miju-v2.1 | swl-models | 2023-02-10T07:23:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-10T07:10:12Z | ---
license: creativeml-openrail-m
---
|
aichina/ttz-470 | aichina | 2023-02-10T07:19:52Z | 8 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-10T07:18:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: ttz
---
### ttz_470 Dreambooth model trained by aichina with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
ttz (use that in your prompt)

|
nolanaatama/f222 | nolanaatama | 2023-02-10T07:19:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-10T06:30:39Z | ---
license: creativeml-openrail-m
---
|
fathyshalab/massive_social-roberta-large-v1-3-7 | fathyshalab | 2023-02-10T07:15:04Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-10T07:14:44Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/massive_social-roberta-large-v1-3-7
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-3-7")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
aisingapore/unsupervised-feature-decomposition | aisingapore | 2023-02-10T07:08:47Z | 0 | 0 | null | [
"ufd",
"text-classification",
"undersupervised-feature-decomposition",
"en",
"de",
"fr",
"ja",
"license:gpl-3.0",
"region:us"
]
| text-classification | 2023-02-08T09:46:12Z | ---
license: gpl-3.0
language:
- en
- de
- fr
- ja
tags:
- ufd
- text-classification
- undersupervised-feature-decomposition
inference: false
model-index:
- name: undersupervised-feature-decomposition
results:
- task:
type: text-classification
name: UFD
dataset:
name: lijuntaopku/UFD/tree/main/data (Github)
type: https://github.com/lijuntaopku/UFD/tree/main/data
metrics:
- name: Avg Acc (German) on development set
type: accuracy
value: 87.85%
- name: Avg Acc (French) on development set
type: accuracy
value: 87.45%
- name: Avg Acc (Japanese) on development set
type: accuracy
value: 83.725%
- task:
type: text-classification
name: UFD
metrics:
- name: Avg Acc (German) reported by authors in paper on development set
type: accuracy
value: 84.00%
- name: Avg Acc (French) reported by authors in paper on development set
type: accuracy
value: 88.40%
- name: Avg Acc (Japanese) reported by authors in paper on development set
type: accuracy
value: 85.00%
---
# Cross Lingual Cross Domain
You can **try out the model** at [SGNLP](https://sgnlp.aisingapore.net/cross-lingual-cross-domain).<br />
If you want to find out more information, please contact us at [SGNLP-AISingapore]([email protected]).
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Model Parameters](#model-parameters)
- [License](#license)
## Model Details
**Model Name:** Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model
- **Description:** This is an implementation of the paper "Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model".
- **Paper:** Unsupervised domain adaptation of a pretrained cross-lingual language model. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Nov 2020 (pp. 3672-3678).
- **Author(s):** Li, J., He, R., Ye, H., Ng, H. T., Bing, L., & Yan, R. (2020).
- **URL:** https://www.ijcai.org/Proceedings/2020/508
# How to Get Started With the Model
## Install Python package
SGnlp is an initiative by AI Singapore's NLP Hub, which aims to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in industry. <br><br>
Various NLP models, other than cross lingual cross domain are available in the python package. You can try them out at [SGNLP-Demo](https://sgnlp.aisingapore.net/) | [SGNLP-Github](https://github.com/aisingapore/sgnlp).
```bash
pip install sgnlp
```
## Examples
For a full code guide, please refer to the [documentation](https://sgnlp.aisingapore.net/docs/model/ufd.html). <br> Alternatively, you can also try out the [demo](https://sgnlp.aisingapore.net/cross-lingual-cross-domain) for Cross Lingual Cross Domain.
Example of the Unsupervised Feature Decomposition (UFD) model (German language):
```python
from sgnlp.models.ufd import UFDModelBuilder, UFDPreprocessor
# Instantiate model builder and preprocessor
model_builder = UFDModelBuilder(
source_domains=['books'],
target_languages=['de'],
target_domains=['dvd'])
preprocessor = UFDPreprocessor()
# Build pretrained model groups
model_groups = model_builder.build_model_group()
# Model predict ('books_de_dvd' model example)
instance = """Wolverine is BACK Der Film ist im Grunde wie alle Teile der X-Men fรผr Comic-Fans auf jeden Fall ein muss.
Hugh Jackman spielt seine Rolle wie immer so gut was ich von den ein oder anderen Darsteller leider nicht
sagen kann. Story und Action sind aber genug Grรผnde um sich die Blu-ray zu kaufen."""
instance_features = preprocessor([instance])
output = model_groups['books_de_dvd'](**instance_features)
```
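A hypothetical post-processing step (it assumes the returned `output` exposes a `.logits` tensor of shape `[batch, num_classes]`, which is an assumption; check the sgnlp docs to confirm):
```python
import torch

# Pick the highest-scoring sentiment class from the logits (assumed attribute)
predicted_class = torch.argmax(output.logits, dim=-1)
print(predicted_class)
```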
# Training
The training datasets can be retrieved from the authors' repository ([GitHub](https://github.com/lijuntaopku/UFD/tree/main/data)).
#### Training Results - For UFD
- **Training Time: (Unsupervised training)** ~3 hours for 30 epochs on a single V100 GPU
- **Training Time: (Supervised training)** ~3 hours for 60 epochs on a single V100 GPU
# Model Parameters
- **Model Weights:** [refer to documentation for details](https://sgnlp.aisingapore.net/docs/model/ufd.html)
- **Model Config:** [refer to documentation for details](https://sgnlp.aisingapore.net/docs/model/ufd.html)
- **Model Inputs:** Raw text.
- **Model Outputs:** Array of logits with the size of number of classes.
- **Model Size:** XLM-Roberta: ~2.2GB, Adaptor Domain: ~8.0MB, Adaptor Global: ~8.0MB, Feature Mapper: ~8.0MB, Classifier: ~9.1KB.
- **Model Inference Info:** ~2 sec on Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz.
- **Usage Scenarios:** Sentiment analysis for eCommerce with operations across multiple countries.
# License
- **For non-commercial use:** GNU GPLv3.
- **For commercial use:** please contact us at [SGNLP-AISingapore]([email protected]) |
atorre/poca-SoccerTwos-30M | atorre | 2023-02-10T07:05:47Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-10T07:05:37Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: atorre/poca-SoccerTwos-30M
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Okyx/NERTESTINGCAROLINE2 | Okyx | 2023-02-10T06:59:42Z | 58 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-02-10T06:59:18Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: NERTESTINGCAROLINE2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# NERTESTINGCAROLINE2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0055
- Validation Loss: 0.0050
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
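Usage details are not documented; a minimal inference sketch, assuming the TF checkpoint loads through the standard `pipeline` API:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="Okyx/NERTESTINGCAROLINE2",
               aggregation_strategy="simple")
print(ner("My name is Caroline and I work in Singapore."))
```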
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10395, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0833 | 0.0156 | 0 |
| 0.0100 | 0.0060 | 1 |
| 0.0055 | 0.0050 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T06:58:16Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Breakout-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-05T22:59:48Z | ---
tags:
- Breakout-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Breakout-v5
type: Breakout-v5
metrics:
- type: mean_reward
value: 639.10 +/- 221.36
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Breakout-v5**
This is a trained model of a PPO agent playing Breakout-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Breakout-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Breakout-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T06:57:43Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Breakout-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:57:39Z | ---
tags:
- Breakout-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Breakout-v5
type: Breakout-v5
metrics:
- type: mean_reward
value: 818.30 +/- 130.50
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Breakout-v5**
This is a trained model of a PPO agent playing Breakout-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Breakout-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Breakout-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T06:57:07Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Centipede-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-05T22:59:20Z | ---
tags:
- Centipede-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Centipede-v5
type: Centipede-v5
metrics:
- type: mean_reward
value: 7550.20 +/- 3036.50
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
thanat/bert-finetuned-squad | thanat | 2023-02-10T06:53:36Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-09T23:30:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: thanat/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# thanat/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [squad](https://huggingface.co/datasets/squad) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5695
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
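Usage details are not documented; a minimal question-answering sketch, assuming the TF checkpoint loads through the standard `pipeline` API:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="thanat/bert-finetuned-squad")
print(qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
))
```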
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2755 | 0 |
| 0.7832 | 1 |
| 0.5695 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T06:52:36Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Boxing-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:52:32Z | ---
tags:
- Boxing-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Boxing-v5
type: Boxing-v5
metrics:
- type: mean_reward
value: 99.80 +/- 0.60
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Boxing-v5**
This is a trained model of a PPO agent playing Boxing-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Boxing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Boxing-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T06:48:30Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Boxing-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:48:24Z | ---
tags:
- Boxing-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Boxing-v5
type: Boxing-v5
metrics:
- type: mean_reward
value: 100.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Boxing-v5**
This is a trained model of a PPO agent playing Boxing-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Boxing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Boxing-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T06:48:07Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BeamRider-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:48:03Z | ---
tags:
- BeamRider-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRider-v5
type: BeamRider-v5
metrics:
- type: mean_reward
value: 37196.80 +/- 14290.72
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BeamRider-v5**
This is a trained model of a PPO agent playing BeamRider-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BeamRider-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'BeamRider-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T06:45:12Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Bowling-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:45:08Z | ---
tags:
- Bowling-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Bowling-v5
type: Bowling-v5
metrics:
- type: mean_reward
value: 49.00 +/- 5.10
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Bowling-v5**
This is a trained model of a PPO agent playing Bowling-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Bowling-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Bowling-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Bowling-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T06:42:28Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Berzerk-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:42:24Z | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 4940.00 +/- 2761.43
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T06:42:04Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Berzerk-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:41:58Z | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 5506.00 +/- 2309.32
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T06:41:51Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Bowling-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:41:46Z | ---
tags:
- Bowling-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Bowling-v5
type: Bowling-v5
metrics:
- type: mean_reward
value: 21.20 +/- 4.31
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Bowling-v5**
This is a trained model of a PPO agent playing Bowling-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Bowling-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Bowling-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Bowling-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T06:38:06Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BattleZone-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:38:03Z | ---
tags:
- BattleZone-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BattleZone-v5
type: BattleZone-v5
metrics:
- type: mean_reward
value: 70200.00 +/- 16803.57
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BattleZone-v5**
This is a trained model of a PPO agent playing BattleZone-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BattleZone-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'BattleZone-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T06:36:24Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Berzerk-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-05T22:58:13Z | ---
tags:
- Berzerk-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Berzerk-v5
type: Berzerk-v5
metrics:
- type: mean_reward
value: 1138.00 +/- 261.49
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Berzerk-v5**
This is a trained model of a PPO agent playing Berzerk-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Berzerk-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Berzerk-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
padmajabfrl/Ethnicity-Classification | padmajabfrl | 2023-02-10T06:25:36Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-01T05:13:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Ethnicity-Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ethnicity-Classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0358
- Accuracy: 0.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
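A minimal sketch of the equivalent `TrainingArguments`; the output directory is a placeholder and only the values listed above are set:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ethnicity-classification",  # placeholder, not taken from the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```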
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0569 | 1.0 | 5305 | 0.0597 | 0.9884 |
| 0.0324 | 2.0 | 10610 | 0.0418 | 0.9924 |
| 0.0151 | 3.0 | 15915 | 0.0359 | 0.9941 |
| 0.0037 | 4.0 | 21220 | 0.0366 | 0.9946 |
| 0.0044 | 5.0 | 26525 | 0.0358 | 0.9951 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.12.0
- Datasets 2.9.0
- Tokenizers 0.10.3
|
pfunk/Pong-v4-DQPN_p500_e0.10-seed1 | pfunk | 2023-02-10T06:08:51Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T06:08:28Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 3.90 +/- 7.15
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p500_e0.10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p500_e0.10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p500_e0.10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p500_e0.10 --start-policy-f 500000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.1,
'exp_name': 'DQPN_p500_e0.10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 500000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
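The `--start-policy-f 500000 --end-policy-f 1000 --evaluation-fraction 0.10` flags suggest the policy-network update interval is annealed over the first 10% of training, analogous to CleanRL's epsilon schedule. A hedged sketch of that reading — the function below is an assumption, not code from `dqpn_atari.py`:
```python
def policy_update_interval(step: int,
                           start_policy_f: int = 500_000,
                           end_policy_f: int = 1_000,
                           evaluation_fraction: float = 0.10,
                           total_timesteps: int = 10_000_000) -> int:
    """Hypothetical linear anneal of the policy-network update interval."""
    duration = evaluation_fraction * total_timesteps   # anneal over first 10% of steps
    slope = (end_policy_f - start_policy_f) / duration
    return int(max(slope * step + start_policy_f, end_policy_f))  # clamp at end value
```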
|
juanmi1234/Reinforce-PixelCopter | juanmi1234 | 2023-02-10T05:52:49Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T03:37:37Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 55.90 +/- 40.18
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KoboldAI/OPT-2.7B-Nerybus-Mix | KoboldAI | 2023-02-10T05:38:20Z | 1,761 | 11 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-02-09T10:45:38Z | ---
license: other
language:
- en
inference: false
---
# OPT-2.7B-Nerybus-Mix
This is an experimental model containing a ***parameter-wise 50/50 blend (weighted average)*** of the weights of *NerysV2-2.7B* and *ErebusV1-2.7B*.
Preliminary testing produces fairly coherent outputs; the blend appears to retain the NSFW character of Erebus with a Nerys-esque twist in its prose.
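For readers curious what a parameter-wise 50/50 blend looks like mechanically, the sketch below averages two state dicts. It is an illustration of the idea, not Concedo's actual merge script, and the output path is a placeholder:
```python
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained("KoboldAI/OPT-2.7B-Nerys-v2")
b = AutoModelForCausalLM.from_pretrained("KoboldAI/OPT-2.7B-Erebus")

merged = a.state_dict()
for name, param in b.state_dict().items():
    if param.is_floating_point():                        # skip integer buffers
        merged[name] = 0.5 * merged[name] + 0.5 * param  # parameter-wise average

a.load_state_dict(merged)
a.save_pretrained("OPT-2.7B-Nerybus-Mix")                # placeholder output path
```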
# License
The two models used for this blend, *NerysV2-2.7B* and *ErebusV1-2.7B* are made by **Mr. Seeker**.
- https://huggingface.co/KoboldAI/OPT-2.7B-Erebus
- https://huggingface.co/KoboldAI/OPT-2.7B-Nerys-v2
The base OPT-2.7B model is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
# Evaluation Results
As the original datasets used for the source models are not publicly available, I use my own datasets for this evaluation, which may not provide an accurate comparison.
Eval parameters: 32,000 characters extracted from the middle of the corpus, tested in blocks of 1024 tokens, with the same dataset used for each test batch; a rough reproduction sketch follows the results.
```
Literotica Dataset Eval (Randomly selected stories)
{'eval_loss': 2.571258306503296, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.5491442680358887, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.6158597469329834, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.614469051361084, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.4960227012634277, 'name': '(Unreleased 2.7B ModronAI Model)'}
ASSTR Dataset Eval (Randomly selected stories)
{'eval_loss': 2.664412498474121, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.6451029777526855, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.7259647846221924, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.6675195693969727, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.962111473083496, 'name': '(Unreleased 2.7B ModronAI Model)'}
Sexstories Dataset Eval (Random highly rated stories)
{'eval_loss': 2.2352423667907715, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.194378137588501, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.307469129562378, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.293961763381958, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.0103421211242676, 'name': '(Unreleased 2.7B ModronAI Model)'}
Harry Potter Dataset Eval (Canon books)
{'eval_loss': 2.473742961883545, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.480600357055664, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.506237506866455, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.5074169635772705, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.273703098297119, 'name': '(Unreleased 2.7B ModronAI Model)'}
Star Wars Dataset Eval (Rogue One Novel)
{'eval_loss': 2.5031676292419434, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'}
{'eval_loss': 2.5239150524139404, 'name': 'KoboldAI_OPT-2.7B-Erebus'}
{'eval_loss': 2.526801586151123, 'name': 'KoboldAI_OPT-2.7B-Nerys'}
{'eval_loss': 2.473283529281616, 'name': 'facebook_opt-2.7b'}
{'eval_loss': 2.955465793609619, 'name': '(Unreleased 2.7B ModronAI Model)'}
```
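A sketch of the block-wise evaluation described above — the model id and corpus file are placeholders, and the exact blocking used for the numbers above may differ:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KoboldAI/OPT-2.7B-Nerys-v2"           # any model from the tables above
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = open("eval_corpus.txt").read()[:32000]     # placeholder corpus slice
ids = tok(text, return_tensors="pt").input_ids[0]

losses = []
with torch.no_grad():
    for i in range(0, ids.size(0) - 1023, 1024):  # non-overlapping 1024-token blocks
        block = ids[i:i + 1024].unsqueeze(0)
        losses.append(model(block, labels=block).loss.item())
print(sum(losses) / len(losses))                  # comparable to the eval_loss figures
```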
It is recommended to use this model with the KoboldAI software. All feedback and comments can be directed to Concedo on the KoboldAI Discord.
|
cleanrl/Asterix-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T05:36:55Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Asterix-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:36:50Z | ---
tags:
- Asterix-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Asterix-v5
type: Asterix-v5
metrics:
- type: mean_reward
value: 264600.00 +/- 34685.59
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Asterix-v5**
This is a trained model of a PPO agent playing Asterix-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Asterix-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Asterix-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Asterix-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Asterix-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Asterix-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Asterix-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T05:36:15Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Atlantis-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-05T23:10:52Z | ---
tags:
- Atlantis-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Atlantis-v5
type: Atlantis-v5
metrics:
- type: mean_reward
value: 981050.00 +/- 53973.07
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Atlantis-v5**
This is a trained model of a PPO agent playing Atlantis-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Atlantis-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Atlantis-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Atlantis-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T05:35:27Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Atlantis-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:35:21Z | ---
tags:
- Atlantis-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Atlantis-v5
type: Atlantis-v5
metrics:
- type: mean_reward
value: 940680.00 +/- 15926.32
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Atlantis-v5**
This is a trained model of a PPO agent playing Atlantis-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Atlantis-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Atlantis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Atlantis-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Atlantis-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T05:27:59Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Assault-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-05T22:54:57Z | ---
tags:
- Assault-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Assault-v5
type: Assault-v5
metrics:
- type: mean_reward
value: 25571.30 +/- 9973.68
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Assault-v5**
This is a trained model of a PPO agent playing Assault-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Assault-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Assault-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Assault-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T05:27:08Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Assault-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:27:04Z | ---
tags:
- Assault-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Assault-v5
type: Assault-v5
metrics:
- type: mean_reward
value: 20280.10 +/- 7934.76
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Assault-v5**
This is a trained model of a PPO agent playing Assault-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Assault-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Assault-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Assault-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Assault-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T05:23:59Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Alien-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:23:52Z | ---
tags:
- Alien-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Alien-v5
type: Alien-v5
metrics:
- type: mean_reward
value: 4749.00 +/- 1891.72
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Alien-v5**
This is a trained model of a PPO agent playing Alien-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Alien-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Alien-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Alien-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
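The `mean_reward` value in this card's metadata (`4749.00 +/- 1891.72`) follows CleanRL's usual mean ± standard deviation over evaluation episodes; the same string can be produced from a list of episodic returns (the returns below are made-up placeholders):
```python
import numpy as np

returns = np.array([4100.0, 5230.0, 2890.0, 6400.0, 5124.0])  # placeholder returns
print(f"{returns.mean():.2f} +/- {returns.std():.2f}")        # mean ± std, card format
```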
|
cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T05:23:55Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Amidar-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:23:49Z | ---
tags:
- Amidar-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Amidar-v5
type: Amidar-v5
metrics:
- type: mean_reward
value: 1734.50 +/- 526.81
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Amidar-v5**
This is a trained model of a PPO agent playing Amidar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Amidar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Amidar-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Amidar-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2 | cleanrl | 2023-02-10T05:23:52Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Amidar-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:23:48Z | ---
tags:
- Amidar-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Amidar-v5
type: Amidar-v5
metrics:
- type: mean_reward
value: 1320.20 +/- 244.04
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Amidar-v5**
This is a trained model of a PPO agent playing Amidar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Amidar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Amidar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Amidar-v5 --seed 2
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Amidar-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 2,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1 | cleanrl | 2023-02-10T05:23:42Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BankHeist-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-05T22:57:54Z | ---
tags:
- BankHeist-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BankHeist-v5
type: BankHeist-v5
metrics:
- type: mean_reward
value: 452.00 +/- 65.54
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BankHeist-v5**
This is a trained model of a PPO agent playing BankHeist-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BankHeist-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BankHeist-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'BankHeist-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T05:21:42Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Alien-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:21:34Z | ---
tags:
- Alien-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Alien-v5
type: Alien-v5
metrics:
- type: mean_reward
value: 4560.00 +/- 1837.65
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Alien-v5**
This is a trained model of a PPO agent playing Alien-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Alien-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Alien-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Alien-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Asteroids-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3 | cleanrl | 2023-02-10T05:20:39Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Asteroids-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:20:34Z | ---
tags:
- Asteroids-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Asteroids-v5
type: Asteroids-v5
metrics:
- type: mean_reward
value: 17852.00 +/- 19061.85
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Asteroids-v5**
This is a trained model of a PPO agent playing Asteroids-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Asteroids-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Asteroids-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Asteroids-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Asteroids-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Asteroids-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 7680,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Asteroids-v5',
'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4, 5, 6],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1920,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 60,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6510,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
raw-vitor/danny | raw-vitor | 2023-02-10T05:05:33Z | 31 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-10T04:54:12Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### danny Dreambooth model trained by raw-vitor with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
pittawat/Reinforce-cart-pole | pittawat | 2023-02-10T05:02:42Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T05:02:29Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cart-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
swl-models/scabbard-v2.0 | swl-models | 2023-02-10T04:39:59Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-10T04:26:46Z | ---
license: creativeml-openrail-m
---
|
nolanaatama/izm | nolanaatama | 2023-02-10T04:36:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-10T04:34:12Z | ---
license: creativeml-openrail-m
---
|
peter-nagy/deep-grader-codebert-cpp | peter-nagy | 2023-02-10T04:36:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-10T04:22:33Z | ---
license: apache-2.0
---
Deep Grader is a program-grading language model that fine-tunes large pre-trained code models (CodeBERT, UniXcoder) on the task of automatic program grading for Python and C++.
For more information, see: [https://github.com/peter-nagy1/Deep-Grader](https://github.com/peter-nagy1/Deep-Grader)
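As a rough, hedged sketch, the checkpoint can be loaded with the standard `transformers` sequence-classification API. The input format shown is an assumption for illustration only; the exact pairing of problem statement and submission is documented in the GitHub repository.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

repo = "peter-nagy/deep-grader-codebert-cpp"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Hypothetical input: a C++ submission to be graded.
code = "int add(int a, int b) { return a + b; }"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # predicted grade-class probabilities
```
|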
peter-nagy/deep-grader-unixcoder-cpp | peter-nagy | 2023-02-10T04:20:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-10T04:06:41Z | ---
license: apache-2.0
---
Deep Grader is a program-grading language model that fine-tunes large pre-trained code models (CodeBERT, UniXcoder) on the task of automatic program grading for Python and C++.
For more information, see: [https://github.com/peter-nagy1/Deep-Grader](https://github.com/peter-nagy1/Deep-Grader) |
yhchoi/distilbert-base-uncased-finetuned-emotion | yhchoi | 2023-02-10T03:51:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-10T02:28:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
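A minimal inference sketch using the `transformers` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yhchoi/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy to see you again!"))
# e.g. [{'label': 'joy', 'score': ...}] -- labels follow the emotion dataset
```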
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.1+cu110
- Datasets 2.9.0
- Tokenizers 0.13.2
|
peter-nagy/deep-grader-unixcoder-python | peter-nagy | 2023-02-10T03:51:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-10T02:25:25Z | ---
license: apache-2.0
---
Deep Grader is a program-grading language model that fine-tunes large pre-trained code models (CodeBERT, UniXcoder) on the task of automatic program grading for Python and C++.
For more information, see: [https://github.com/peter-nagy1/Deep-Grader](https://github.com/peter-nagy1/Deep-Grader) |
swl-models/AnyJuice-v3.2 | swl-models | 2023-02-10T03:46:40Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-10T03:10:32Z | ---
license: creativeml-openrail-m
---
|
XPeng2022/fotorx | XPeng2022 | 2023-02-10T03:44:35Z | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-09T09:12:31Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### fotorx Dreambooth model trained by XPeng2022
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
XPeng2022/hgs3 | XPeng2022 | 2023-02-10T03:44:06Z | 31 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-10T03:32:39Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### HGS3 Dreambooth model trained by XPeng2022
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
arnov/name-gender | arnov | 2023-02-10T03:13:20Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"gender",
"names",
"lgbtq+",
"zero-shot-classification",
"en",
"fr",
"es",
"de",
"dataset:openwebtext",
"region:us"
]
| zero-shot-classification | 2023-02-10T02:08:47Z | ---
datasets:
- openwebtext
language:
- en
- fr
- es
- de
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: zero-shot-classification
tags:
- gender
- names
- lgbtq+
--- |
gatardochi/Reinforce-CartPole-v1 | gatardochi | 2023-02-10T02:34:51Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T02:34:39Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
income/bpr-contriever-gpl-quora | income | 2023-02-10T02:33:32Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-10T02:33:19Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 59733 with parameters:
```
{'batch_size': 75, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`income.bpr.gpl.loss.BPRMarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 70000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
junjuice0/VOXO | junjuice0 | 2023-02-10T02:20:26Z | 108 | 46 | diffusers | [
"diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-03T09:58:41Z | ---
thumbnail: "https://media.discordapp.net/attachments/1002437703192821910/1073391952977989632/thumb.png"
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---

# VOXO
Merged model by junjuice0.
This model was originally created just for me, so I am not after quality; please don't expect too much.
I may release a fine-tuned version of this model in the future, but only God knows whether I will get around to it.
[JOIN US (Japanese)](https://discord.gg/ai-art)
# VOXO-Vtuber (VOXO-v0-vtuber.safetensors)
This model can generate vtubers from Hololive and Nijisanji.
Some vtubers come out better than others.
It is recommended to give the name a weight of about 1.2 (e.g. (ange katrina:1.2)).
# RECOMMENDED
It is recommended to use textual inversion embeddings (TIs) such as bad-images or bad-prompt in the negative prompt. Quality prompts (e.g. masterpiece, high quality) are not required.
Hires. fix may change the look of the image considerably; use it according to your preference.
# HOW TO USE
Usage is the same as for other Stable Diffusion models, and other guides explain it better than I could here; a minimal `diffusers` sketch is shown below.
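A rough sketch, assuming the repository's diffusers-format weights (for the standalone checkpoint VOXO-v0-vtuber.safetensors, load the file in your UI of choice instead):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("junjuice0/VOXO", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# A1111-style weighting such as (ange katrina:1.2) is not parsed by plain diffusers,
# so the name is used unweighted here.
prompt = "ange katrina, 1girl, looking at viewer"
image = pipe(prompt).images[0]
image.save("voxo_sample.png")
```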
|
linkin-13/xtremedistil-l12-h384-uncased-trivia-qa | linkin-13 | 2023-02-10T02:10:16Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-10T01:07:19Z | ---
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on the TriviaQA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2 |
sgoodfriend/dqn-sb3-SpaceInvadersNoFrameskip-v4 | sgoodfriend | 2023-02-10T02:01:56Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T02:01:22Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: dqn
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 539.50 +/- 121.23
name: mean_reward
verified: false
---
# **dqn** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **dqn** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
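A hedged load sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub; the filename below is assumed.
checkpoint = load_from_hub(
    repo_id="sgoodfriend/dqn-sb3-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```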
|
CSAle/DilbertDiffusion2 | CSAle | 2023-02-10T02:00:58Z | 31 | 0 | diffusers | [
"diffusers",
"tensorboard",
"pytorch",
"stable-diffusion-v2-1-base",
"text-to-image",
"diffusion-models-class",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-10T01:59:57Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion-v2-1-base
- text-to-image
- diffusion-models-class
widget:
- text: dilbert walking his dog
---
# DreamBooth model for the Dilbert concept trained by CSAle on the CSAle/DilbertDiffusionDataset dataset.
This is a Stable Diffusion model fine-tuned on the Dilbert concept. It can be used by modifying the `instance_prompt`: **dilbert**
## Description
A DilbertDiffusion model
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('CSAle/DilbertDiffusion2')
image = pipeline().images[0]
image
```
|
WildBill258/ppo-LunarLander-v2 | WildBill258 | 2023-02-10T01:18:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T01:18:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.41 +/- 18.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
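A hedged sketch, assuming the standard `huggingface_sb3` layout (the filename is a guess; verify it in the repo):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="WildBill258/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed name
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```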
|
eormeno12/platzi_vit_model | eormeno12 | 2023-02-10T00:54:04Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-10T00:07:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi_vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9924812030075187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi_vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0328
- Accuracy: 0.9925
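A minimal inference sketch using the `transformers` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="eormeno12/platzi_vit_model")
preds = classifier("path/to/bean_leaf.jpg")  # local file, URL, or PIL image
print(preds)  # top predicted bean-leaf classes with scores
```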
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1427 | 3.85 | 500 | 0.0328 | 0.9925 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
MagicalGirlsFC/Magical_Girls_Football_Club-Mix | MagicalGirlsFC | 2023-02-10T00:47:38Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-10T00:46:08Z | ---
license: creativeml-openrail-m
---
|
figfig/local_test_model | figfig | 2023-02-10T00:41:23Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:figfig/restaurant_order_local_test",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-10T00:30:17Z | ---
language:
- en
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- figfig/restaurant_order_local_test
metrics:
- wer
model-index:
- name: restaurant_local_test_model
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: local_test_data
type: figfig/restaurant_order_local_test
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 78.57142857142857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# restaurant_local_test_model
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the local_test_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5435
- Wer: 78.5714
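A minimal transcription sketch using the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="figfig/local_test_model")
result = asr("path/to/order_audio.wav")  # local file or URL
print(result["text"])
```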
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 10.0 | 10 | 2.2425 | 7.1429 |
| No log | 20.0 | 20 | 0.6651 | 0.0 |
| 2.4375 | 30.0 | 30 | 0.5776 | 35.7143 |
| 2.4375 | 40.0 | 40 | 0.5435 | 78.5714 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gatardochi/dqn-SpaceInvadersNoFrameskip-v4 | gatardochi | 2023-02-10T00:38:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T00:37:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 736.00 +/- 244.23
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gatardochi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gatardochi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gatardochi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
eshwarprasadS/lunarlanderv2-dqn | eshwarprasadS | 2023-02-10T00:25:08Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-10T00:24:27Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 141.31 +/- 65.12
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
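A hedged load sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="eshwarprasadS/lunarlanderv2-dqn",
    filename="dqn-LunarLander-v2.zip",  # assumed; check the repo's files
)
model = DQN.load(checkpoint)
```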
|
bongsoo/albert-small-kor-sbert-v1 | bongsoo | 2023-02-10T00:09:50Z | 4 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-01-11T04:15:28Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# albert-small-kor-sbert-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
A SentenceBERT model built from the [albert-small-kor-v1](https://huggingface.co/bongsoo/albert-small-kor-v1) model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bongsoo/albert-small-kor-sbert-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/albert-small-kor-sbert-v1')
model = AutoModel.from_pretrained('bongsoo/albert-small-kor-sbert-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
- ์ฑ๋ฅ ์ธก์ ์ ์ํ ๋ง๋ญ์น๋, ์๋ ํ๊ตญ์ด (kor), ์์ด(en) ํ๊ฐ ๋ง๋ญ์น๋ฅผ ์ด์ฉํจ
<br> ํ๊ตญ์ด : **korsts(1,379์๋ฌธ์ฅ)** ์ **klue-sts(519์๋ฌธ์ฅ)**
<br> ์์ด : [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt)(1,376์๋ฌธ์ฅ) ์ [glue:stsb](https://huggingface.co/datasets/glue/viewer/stsb/validation) (1,500์๋ฌธ์ฅ)
- ์ฑ๋ฅ ์งํ๋ **cosin.spearman**
- ํ๊ฐ ์ธก์ ์ฝ๋๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-test3.ipynb) ์ฐธ์กฐ
-
|๋ชจ๋ธ |korsts|klue-sts|glue(stsb)|stsb_multi_mt(en)|
|:--------|------:|--------:|--------------:|------------:|
|distiluse-base-multilingual-cased-v2 |0.7475 |0.7855 |0.8193 |0.8075|
|paraphrase-multilingual-mpnet-base-v2 |0.8201 |0.7993 |0.8907 |0.8682|
|bongsoo/moco-sentencedistilbertV2.1 |0.8390 |0.8767 |0.8805 |0.8548|
|bongsoo/albert-small-kor-sbert-v1 |0.8305 |0.8588 |0.8419 |0.7965|
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
- The [albert-small-kor-v1](https://huggingface.co/bongsoo/albert-small-kor-v1) model was trained with an sts(10)-distil(10)-nli(3)-sts(10) schedule
The model was trained with the parameters:
**๊ณตํต**
- **do_lower_case=1, correct_bios=0, polling_mode=cls**
**1.STS**
- ๋ง๋ญ์น : korsts(5,749) + kluestsV1.1(11,668) + stsb_multi_mt(5,749) + mteb/sickr-sts(9,927) + glue stsb(5,749) (์ด:38,842)
- Param : **lr: 1e-4, eps: 1e-6, warm_step=10%, epochs: 10, train_batch: 32, eval_batch: 64, max_token_len: 72**
- ํ๋ จ์ฝ๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sentece-bert-sts.ipynb) ์ฐธ์กฐ
**2.Distillation**
- Teacher model: paraphrase-multilingual-mpnet-base-v2 (max_token_len: 128)
- Corpus: news_talk_en_ko_train.tsv (English-Korean dialogue/news parallel corpus: 1.38M pairs)
- Param : **lr: 5e-5, eps: 1e-8, epochs: 10, train_batch: 32, eval/test_batch: 64, max_token_len: 128 (matched to the teacher model)**
- Training code: see [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-distillaton.ipynb)
**3.NLI**
- ๋ง๋ญ์น : ํ๋ จ(967,852) : kornli(550,152), kluenli(24,998), glue-mnli(392,702) / ํ๊ฐ(3,519) : korsts(1,500), kluests(519), gluests(1,500) ()
- HyperParameter : **lr: 3e-5, eps: 1e-8, warm_step=10%, epochs: 3, train/eval_batch: 64, max_token_len: 128**
- ํ๋ จ์ฝ๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sentence-bert-nli.ipynb) ์ฐธ์กฐ
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': True}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
bongsoo |
bongsoo/klue-sbert-v1 | bongsoo | 2023-02-10T00:07:24Z | 76 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-01-13T02:48:18Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# klue-sbert-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
A model fine-tuned from klue/bert-base as a SentenceBERT.
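## Usage (Sentence-Transformers)
A minimal usage sketch, following the pattern of the sibling bongsoo model cards:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bongsoo/klue-sbert-v1')
embeddings = model.encode(['This is an example sentence', 'Each sentence is converted'])
print(embeddings)
```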
## Evaluation Results
- ์ฑ๋ฅ ์ธก์ ์ ์ํ ๋ง๋ญ์น๋, ์๋ ํ๊ตญ์ด (kor), ์์ด(en) ํ๊ฐ ๋ง๋ญ์น๋ฅผ ์ด์ฉํจ
<br> ํ๊ตญ์ด : **korsts(1,379์๋ฌธ์ฅ)** ์ **klue-sts(519์๋ฌธ์ฅ)**
<br> ์์ด : [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt)(1,376์๋ฌธ์ฅ) ์ [glue:stsb](https://huggingface.co/datasets/glue/viewer/stsb/validation) (1,500์๋ฌธ์ฅ)
- ์ฑ๋ฅ ์งํ๋ **cosin.spearman**
- ํ๊ฐ ์ธก์ ์ฝ๋๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-test3.ipynb) ์ฐธ์กฐ
-
|๋ชจ๋ธ |korsts|klue-sts|glue(stsb)|stsb_multi_mt(en)|
|:--------|------:|--------:|--------------:|------------:|
|distiluse-base-multilingual-cased-v2 |0.7475 |0.7855 |0.8193 |0.8075|
|paraphrase-multilingual-mpnet-base-v2 |0.8201 |0.7993 |0.8907 |0.8682|
|bongsoo/albert-small-kor-sbert-v1 |0.8305 |0.8588 |0.8419 |0.7965|
|bongsoo/kpf-sbert-v1.0 |0.8590 |0.8924 |0.8840 |0.8531|
|**bongsoo/klue-sbert-v1.0** |0.8529 |0.8952 |0.8813 |0.8469|
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
- The [klue/bert-base](https://huggingface.co/klue/bert-base) model was trained with an sts(10)-distil(10)-nli(3)-sts(10) schedule
The model was trained with the parameters:
**๊ณตํต**
- **do_lower_case=1, correct_bios=0, polling_mode=mean**
**1.STS**
- ๋ง๋ญ์น : korsts(5,749) + kluestsV1.1(11,668) + stsb_multi_mt(5,749) + mteb/sickr-sts(9,927) + glue stsb(5,749) (์ด:38,842)
- Param : **lr: 1e-4, eps: 1e-6, warm_step=10%, epochs: 10, train_batch: 128, eval_batch: 64, max_token_len: 72**
- ํ๋ จ์ฝ๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sentece-bert-sts.ipynb) ์ฐธ์กฐ
**2.Distillation**
- Teacher model: paraphrase-multilingual-mpnet-base-v2 (max_token_len: 128)
- Corpus: news_talk_en_ko_train.tsv (English-Korean dialogue/news parallel corpus: 1.38M pairs)
- Param : **lr: 5e-5, eps: 1e-8, epochs: 10, train_batch: 128, eval/test_batch: 64, max_token_len: 128 (matched to the teacher model)**
- Training code: see [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-distillaton.ipynb)
**3.NLI**
- ๋ง๋ญ์น : ํ๋ จ(967,852) : kornli(550,152), kluenli(24,998), glue-mnli(392,702) / ํ๊ฐ(3,519) : korsts(1,500), kluests(519), gluests(1,500) ()
- HyperParameter : **lr: 3e-5, eps: 1e-8, warm_step=10%, epochs: 3, train/eval_batch: 64, max_token_len: 128**
- ํ๋ จ์ฝ๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sentence-bert-nli.ipynb) ์ฐธ์กฐ
-
## Citing & Authors
bongsoo |
bongsoo/kpf-sbert-v1.1 | bongsoo | 2023-02-09T23:59:32Z | 47 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-01-13T05:00:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# kpf-sbert-v1.1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
A model fine-tuned from jinmang2/kpfbert as a SentenceBERT
(kpf-sbert-v1 trained once more on NLI-STS).
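## Usage (Sentence-Transformers)
A minimal usage sketch, following the pattern of the sibling bongsoo model cards:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bongsoo/kpf-sbert-v1.1')
embeddings = model.encode(['This is an example sentence', 'Each sentence is converted'])
print(embeddings)
```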
## Evaluation Results
- ์ฑ๋ฅ ์ธก์ ์ ์ํ ๋ง๋ญ์น๋, ์๋ ํ๊ตญ์ด (kor), ์์ด(en) ํ๊ฐ ๋ง๋ญ์น๋ฅผ ์ด์ฉํจ
<br> ํ๊ตญ์ด : **korsts(1,379์๋ฌธ์ฅ)** ์ **klue-sts(519์๋ฌธ์ฅ)**
<br> ์์ด : [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt)(1,376์๋ฌธ์ฅ) ์ [glue:stsb](https://huggingface.co/datasets/glue/viewer/stsb/validation) (1,500์๋ฌธ์ฅ)
- ์ฑ๋ฅ ์งํ๋ **cosin.spearman**
- ํ๊ฐ ์ธก์ ์ฝ๋๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-test3.ipynb) ์ฐธ์กฐ
-
|๋ชจ๋ธ |korsts|klue-sts|glue(stsb)|stsb_multi_mt(en)|
|:--------|------:|--------:|--------------:|------------:|
|distiluse-base-multilingual-cased-v2 |0.7475 |0.7855 |0.8193 |0.8075|
|paraphrase-multilingual-mpnet-base-v2 |0.8201 |0.7993 |0.8907 |0.8682|
|bongsoo/albert-small-kor-sbert-v1 |0.8305 |0.8588 |0.8419 |0.7965|
|bongsoo/klue-sbert-v1.0 |0.8529 |0.8952 |0.8813 |0.8469|
|bongsoo/kpf-sbert-v1.0 |0.8590 |0.8924 |0.8840 |0.8531|
|**bongsoo/kpf-sbert-v1.1** |0.8750 |0.8900 |0.8863 |0.8554|
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
- The [jinmang2/kpfbert](https://huggingface.co/jinmang2/kpfbert) model was trained with an sts(10)-distil(10)-nli(3)-sts(10)-nli(3)-sts(10) schedule
The model was trained with the parameters:
**๊ณตํต**
- **do_lower_case=1, correct_bios=0, polling_mode=mean**
**1.STS**
- ๋ง๋ญ์น : korsts(5,749) + kluestsV1.1(11,668) + stsb_multi_mt(5,749) + mteb/sickr-sts(9,927) + glue stsb(5,749) (์ด:38,842)
- Param : **lr: 1e-4, eps: 1e-6, warm_step=10%, epochs: 10, train_batch: 128, eval_batch: 64, max_token_len: 72**
- ํ๋ จ์ฝ๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sentece-bert-sts.ipynb) ์ฐธ์กฐ
**2.Distillation**
- Teacher model: paraphrase-multilingual-mpnet-base-v2 (max_token_len: 128)
- Corpus: news_talk_en_ko_train.tsv (English-Korean dialogue/news parallel corpus: 1.38M pairs)
- Param : **lr: 5e-5, eps: 1e-8, epochs: 10, train_batch: 128, eval/test_batch: 64, max_token_len: 128 (matched to the teacher model)**
- Training code: see [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-distillaton.ipynb)
**3.NLI**
- ๋ง๋ญ์น : ํ๋ จ(967,852) : kornli(550,152), kluenli(24,998), glue-mnli(392,702) / ํ๊ฐ(3,519) : korsts(1,500), kluests(519), gluests(1,500) ()
- HyperParameter : **lr: 3e-5, eps: 1e-8, warm_step=10%, epochs: 3, train/eval_batch: 64, max_token_len: 128**
- ํ๋ จ์ฝ๋ [์ฌ๊ธฐ](https://github.com/kobongsoo/BERT/blob/master/sbert/sentence-bert-nli.ipynb) ์ฐธ์กฐ
-
## Citing & Authors
bongsoo |
UCSD-VA-health/RadBERT-2m | UCSD-VA-health | 2023-02-09T23:24:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-06T07:21:44Z | ---
license: apache-2.0
---
## RadBERT-2m
This is a base Radiology-BERT model from UC San Diego and the VA healthcare system. It is initialized from BERT-base-uncased and further pretrained on 2 million de-identified radiology reports from US VA hospitals. The model achieves stronger medical language understanding performance than previous medical-domain models such as BioBERT, Clinical-BERT, BLUE-BERT and BioMed-RoBERTa.
Performance is evaluated on three tasks:
(a) abnormal sentence classification: classify sentences in radiology reports as reporting abnormal or normal findings;
(b) report coding: assign a diagnostic code to a given radiology report for five different coding systems;
(c) report summarization: given the findings section of a radiology report, extractively select key sentences that summarize the findings.
It also shows superior performance on other radiology NLP tasks that are not reported in the paper.
For details, check out the paper here:
[RadBERT: Adapting transformer-based language models to radiology](https://pubs.rsna.org/doi/abs/10.1148/ryai.210258)
### How to use
Here is an example of how to use this model to extract the features of a given text in PyTorch:
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained('UCSD-VA-health/RadBERT-2m')
tokenizer = AutoTokenizer.from_pretrained('UCSD-VA-health/RadBERT-2m')
model = AutoModel.from_pretrained('UCSD-VA-health/RadBERT-2m', config=config)
text = "Replace me by any medical text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### BibTeX entry and citation info
If you use the model, please cite our paper:
```bibtex
@article{yan2022radbert,
title={RadBERT: Adapting transformer-based language models to radiology},
author={Yan, An and McAuley, Julian and Lu, Xing and Du, Jiang and Chang, Eric Y and Gentili, Amilcare and Hsu, Chun-Nan},
journal={Radiology: Artificial Intelligence},
volume={4},
number={4},
pages={e210258},
year={2022},
publisher={Radiological Society of North America}
}
``` |
Qalam/Lei | Qalam | 2023-02-09T23:14:22Z | 0 | 1 | null | [
"text-to-image",
"arxiv:2006.11239",
"arxiv:2010.02502",
"arxiv:2202.09778",
"arxiv:2204.13902",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2023-02-09T22:51:29Z | ---
license: apache-2.0
pipeline_tag: text-to-image
---
<p align="center">
<br>
<img src="./docs/source/en/imgs/diffusers_library.jpg" width="400"/>
<br>
</p>
<p align="center">
<a href="https://github.com/huggingface/diffusers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue">
</a>
<a href="https://github.com/huggingface/diffusers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/diffusers.svg">
</a>
<a href="CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg">
</a>
</p>
๐ค Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves
as a modular toolbox for inference and training of diffusion models.
More precisely, ๐ค Diffusers offers:
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines)). Check [this overview](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/README.md#pipelines-summary) to see all supported pipelines and their corresponding official papers.
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see [src/diffusers/schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers)).
- Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system (see [src/diffusers/models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models)).
- Training examples to show how to train the most popular diffusion model tasks (see [examples](https://github.com/huggingface/diffusers/tree/main/examples), *e.g.* [unconditional-image-generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation)).
## Installation
### For PyTorch
**With `pip`** (official package)
```bash
pip install --upgrade diffusers[torch]
```
**With `conda`** (maintained by the community)
```sh
conda install -c conda-forge diffusers
```
### For Flax
**With `pip`**
```bash
pip install --upgrade diffusers[flax]
```
**Apple Silicon (M1/M2) support**
Please, refer to [the documentation](https://huggingface.co/docs/diffusers/optimization/mps).
## Contributing
We โค๏ธ contributions from the open-source community!
If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md).
You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library.
- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute
- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines
- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22)
Also, say ๐ in our public Discord channel <a href="https://discord.gg/G7tWnz98XR"><img alt="Join us on Discord" src="https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white"></a>. We discuss the hottest trends about diffusion models, help each other with contributions, personal projects or
just hang out โ.
## Quickstart
In order to get started, we recommend taking a look at two notebooks:
- The [Getting started with Diffusers](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) notebook, which showcases an end-to-end example of usage for diffusion models, schedulers and pipelines.
Take a look at this notebook to learn how to use the pipeline abstraction, which takes care of everything (model, scheduler, noise handling) for you, and also to understand each independent building block in the library.
- The [Training a diffusers model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) notebook summarizes diffusion models training methods. This notebook takes a step-by-step approach to training your
diffusion models on an image dataset, with explanatory graphics.
## Stable Diffusion is fully compatible with `diffusers`!
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [LAION](https://laion.ai/) and [RunwayML](https://runwayml.com/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 4GB VRAM.
See the [model card](https://huggingface.co/CompVis/stable-diffusion) for more information.
### Text-to-Image generation with Stable Diffusion
First let's install
```bash
pip install --upgrade diffusers transformers accelerate
```
We recommend using the model in [half-precision (`fp16`)](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) as it almost always gives the same results as full
precision while being roughly twice as fast and requiring half the amount of GPU RAM.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
#### Running the model locally
You can also simply download the model folder and pass the path to the local folder to the `StableDiffusionPipeline`.
```
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```
Assuming the folder is stored locally under `./stable-diffusion-v1-5`, you can run stable diffusion
as follows:
```python
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
If you are limited by GPU memory, you might want to consider chunking the attention computation in addition
to using `fp16`.
The following snippet should keep VRAM usage under 4GB.
```python
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]
```
If you wish to use a different scheduler (e.g. DDIM, LMS, PNDM/PLMS), you can instantiate
it and swap it into the pipeline via the `scheduler` attribute, or pass it to `from_pretrained`.
```python
from diffusers import LMSDiscreteScheduler
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
If you want to run Stable Diffusion on CPU or you want to have maximum precision on GPU,
please run the model in the default *full-precision* setting:
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# disable the following line if you run on CPU
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### JAX/Flax
Diffusers offers a JAX / Flax implementation of Stable Diffusion for very fast inference. JAX shines especially on TPU hardware because each TPU server has 8 accelerators working in parallel, but it runs great on GPUs too.
Running the pipeline with the default PNDMScheduler:
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", revision="flax", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
**Note**:
If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from the "bf16" branch.
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16
)
prompt = "a photo of an astronaut riding a horse on mars"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
Diffusers also has an Image-to-Image generation pipeline with Flax/JAX
```python
import jax
import numpy as np
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import requests
from io import BytesIO
from PIL import Image
from diffusers import FlaxStableDiffusionImg2ImgPipeline
def create_key(seed=0):
return jax.random.PRNGKey(seed)
rng = create_key(0)
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_img = Image.open(BytesIO(response.content)).convert("RGB")
init_img = init_img.resize((768, 512))
prompts = "A fantasy landscape, trending on artstation"
pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", revision="flax",
dtype=jnp.bfloat16,
)
num_samples = jax.device_count()
rng = jax.random.split(rng, jax.device_count())
prompt_ids, processed_image = pipeline.prepare_inputs(prompt=[prompts]*num_samples, image = [init_img]*num_samples)
p_params = replicate(params)
prompt_ids = shard(prompt_ids)
processed_image = shard(processed_image)
output = pipeline(
prompt_ids=prompt_ids,
image=processed_image,
params=p_params,
prng_seed=rng,
strength=0.75,
num_inference_steps=50,
jit=True,
height=512,
width=768).images
output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
```
Diffusers also has a Text-guided inpainting pipeline with Flax/JAX
```python
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import PIL
import requests
from io import BytesIO
from diffusers import FlaxStableDiffusionInpaintPipeline
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained("xvjiarui/stable-diffusion-2-inpainting")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
init_image = num_samples * [init_image]
mask_image = num_samples * [mask_image]
prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs(prompt, init_image, mask_image)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids)
processed_masked_images = shard(processed_masked_images)
processed_masks = shard(processed_masks)
images = pipeline(prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```
### Image-to-Image text-guided generation with Stable Diffusion
The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline
# load the pipeline
device = "cuda"
model_id_or_path = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
# or download via git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
# and pass `model_id_or_path="./stable-diffusion-v1-5"`.
pipe = pipe.to(device)
# let's download an initial image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((768, 512))
prompt = "A fantasy landscape, trending on artstation"
images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
images[0].save("fantasy_landscape.png")
```
You can also run this example on colab [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
### In-painting using Stable Diffusion
The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and a text prompt.
```python
import PIL
import requests
import torch
from io import BytesIO
from diffusers import StableDiffusionInpaintPipeline
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
### Tweak prompts reusing seeds and latents
You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked.
Please have a look at [Reusing seeds for deterministic generation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/reusing_seeds).
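For instance, a minimal sketch that pins the initial latents with a seeded `torch.Generator` (reusing the `pipe` from the snippets above):
```python
import torch

generator = torch.Generator("cuda").manual_seed(1024)
prompt = "a photo of an astronaut riding a horse on mars"
# The same seed reproduces the same image; tweak the prompt while keeping the seed fixed.
image = pipe(prompt, generator=generator).images[0]
image.save("astronaut_rides_horse_seed1024.png")
```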
## Fine-Tuning Stable Diffusion
Fine-tuning techniques make it possible to adapt Stable Diffusion to your own dataset, or add new subjects to it. These are some of the techniques supported in `diffusers`:
- Textual Inversion. Capture novel concepts from a small set of sample images by learning new "words" in the embedding space of the pipeline's text encoder; these special words can then be used within text prompts to achieve very fine-grained control of the resulting images. Please, refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) or [documentation](https://huggingface.co/docs/diffusers/training/text_inversion) to try for yourself.
- Dreambooth. Another technique to capture new concepts in Stable Diffusion. This method fine-tunes the UNet (and, optionally, also the text encoder) of the pipeline to achieve impressive results. Please, refer to [our training example](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) and [training report](https://huggingface.co/blog/dreambooth) for additional details and training recommendations.
- Full Stable Diffusion fine-tuning. If you have a more sizable dataset with a specific look or style, you can fine-tune Stable Diffusion so that it outputs images following those examples. This was the approach taken to create [a Pokรฉmon Stable Diffusion model](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) (by Justing Pinkney / Lambda Labs), [a Japanese specific version of Stable Diffusion](https://huggingface.co/spaces/rinna/japanese-stable-diffusion) (by [Rinna Co.](https://github.com/rinnakk/japanese-stable-diffusion/) and others. You can start at [our text-to-image fine-tuning example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) and go from there.
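As a quick illustration of the Textual Inversion bullet above — a sketch only, assuming a recent `diffusers` version that provides `load_textual_inversion` and using a publicly shared concept as a stand-in — a learned embedding can be loaded and referenced in a prompt via its special token:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# load a learned textual inversion embedding; "<cat-toy>" is the new "word"
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("A <cat-toy> sitting on a bookshelf, photorealistic").images[0]
image.save("cat_toy.png")
```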
## Stable Diffusion Community Pipelines
The release of Stable Diffusion as an open source model has fostered a lot of interesting ideas and experimentation.
Our [Community Examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community) contains many ideas worth exploring, like interpolating to create animated videos, using CLIP Guidance for additional prompt fidelity, term weighting, and much more! [Take a look](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipeline_overview) and [contribute your own](https://huggingface.co/docs/diffusers/using-diffusers/contribute_pipeline).
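As a sketch of how such a community pipeline is typically loaded (the `custom_pipeline` argument fetches the pipeline code from the repository's community folder; the term-weighting syntax below follows the long-prompt-weighting example, to the best of our knowledge):
```python
from diffusers import DiffusionPipeline

# load the "long prompt weighting" community pipeline
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
)
pipe = pipe.to("cuda")

# (word) increases and [word] decreases the attention given to a term
image = pipe.text2img("a (colorful) painting of a [small] castle").images[0]
```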
## Other Examples
There are many ways to try running Diffusers! Here we outline code-focused tools (primarily using `DiffusionPipeline`s and Google Colab) and interactive web-tools.
### Running Code
If you want to run the code yourself 💻, you can try out:
- [Text-to-Image Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256)
```python
# !pip install diffusers["torch"] transformers
from diffusers import DiffusionPipeline
device = "cuda"
model_id = "CompVis/ldm-text2im-large-256"
# load model and scheduler
ldm = DiffusionPipeline.from_pretrained(model_id)
ldm = ldm.to(device)
# run pipeline in inference (sample random noise and denoise)
prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images[0]
# save image
image.save("squirrel.png")
```
- [Unconditional Diffusion with discrete scheduler](https://huggingface.co/google/ddpm-celebahq-256)
```python
# !pip install diffusers["torch"]
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-celebahq-256"
device = "cuda"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
ddpm.to(device)
# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]
# save image
image.save("ddpm_generated_image.png")
```
- [Unconditional Latent Diffusion](https://huggingface.co/CompVis/ldm-celebahq-256)
- [Unconditional Diffusion with continuous scheduler](https://huggingface.co/google/ncsnpp-ffhq-1024)
**Other Image Notebooks**:
* [image-to-image generation with Stable Diffusion](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb),
* [tweak images via repeated Stable Diffusion seeds](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb)
**Diffusers for Other Modalities**:
* [Molecule conformation generation](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/geodiff_molecule_conformation.ipynb),
* [Model-based reinforcement learning](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb)
### Web Demos
If you just want to play around with some web demos, you can try out the following 🚀 Spaces:
| Model | Hugging Face Spaces |
|-------------------------------- |------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Text-to-Image Latent Diffusion | [](https://huggingface.co/spaces/CompVis/text2img-latent-diffusion) |
| Faces generator | [](https://huggingface.co/spaces/CompVis/celeba-latent-diffusion) |
| DDPM with different schedulers | [](https://huggingface.co/spaces/fusing/celeba-diffusion) |
| Conditional generation from sketch | [](https://huggingface.co/spaces/huggingface/diffuse-the-rest) |
| Composable diffusion | [](https://huggingface.co/spaces/Shuang59/Composable-Diffusion) |
## Definitions
**Models**: Neural network that models $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$ (see image below) and is trained end-to-end to *denoise* a noisy input to an image.
*Examples*: UNet, Conditioned UNet, 3D UNet, Transformer UNet
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
<br>
<em> Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
</p>
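Concretely, in the DDPM formulation this learned reverse step is a Gaussian whose statistics are predicted by the network: $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \mu_\theta(\mathbf{x}_t, t), \Sigma_\theta(\mathbf{x}_t, t))$.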
**Schedulers**: Algorithm class for both **inference** and **training**.
The class provides functionality to compute the previous, less noisy image from a model output according to the alpha/beta schedule during inference, and to add noise to samples for training. Also known as **Samplers**.
*Examples*: [DDPM](https://arxiv.org/abs/2006.11239), [DDIM](https://arxiv.org/abs/2010.02502), [PNDM](https://arxiv.org/abs/2202.09778), [DEIS](https://arxiv.org/abs/2204.13902)
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174349706-53d58acc-a4d1-4cda-b3e8-432d9dc7ad38.png" width="800"/>
<br>
<em> Sampling and training algorithms. Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
</p>
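To make the inference role concrete, here is a minimal hand-rolled denoising loop — a sketch that reuses the `google/ddpm-celebahq-256` checkpoint from the example above and assumes its weights load directly as shown:
```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

repo_id = "google/ddpm-celebahq-256"
scheduler = DDPMScheduler.from_pretrained(repo_id)
model = UNet2DModel.from_pretrained(repo_id).to("cuda")

scheduler.set_timesteps(50)  # fewer steps than training, for faster sampling

# start from pure noise and iteratively denoise
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size).to("cuda")
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample                    # the model predicts the noise
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # the scheduler computes x_{t-1}
```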
**Diffusion Pipeline**: End-to-end pipeline that combines multiple diffusion models, possibly text encoders, and other components.
*Examples*: Glide, Latent-Diffusion, Imagen, DALL-E 2
<p align="center">
<img src="https://user-images.githubusercontent.com/10695622/174348898-481bd7c2-5457-4830-89bc-f0907756f64c.jpeg" width="550"/>
<br>
<em> Figure from Imagen (https://imagen.research.google/). </em>
</p>
## Philosophy
- Readability and clarity are preferred over highly optimized code. Strong importance is placed on providing readable, intuitive and elementary code design. *E.g.*, the provided [schedulers](https://github.com/huggingface/diffusers/tree/main/src/diffusers/schedulers) are separated from the provided [models](https://github.com/huggingface/diffusers/tree/main/src/diffusers/models) and provide well-commented code that can be read alongside the original paper.
- Diffusers is **modality independent** and focuses on providing pretrained models and tools to build systems that generate **continuous outputs**, *e.g.* vision and audio.
- Diffusion models and schedulers are provided as concise, elementary building blocks. In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementation and can include components of another library, such as text-encoders. Examples for diffusion pipelines are [Glide](https://github.com/openai/glide-text2im) and [Latent Diffusion](https://github.com/CompVis/latent-diffusion).
## In the works
For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on:
- Diffusers for audio
- Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105).
- Diffusers for video generation
- Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54)
A few pipeline components are already being worked on, namely:
- BDDMPipeline for spectrogram-to-sound vocoding
- GLIDEPipeline to support OpenAI's GLIDE model
- Grad-TTS for text to audio generation / conditional audio generation
We want `diffusers` to be a useful toolbox for diffusion models in general; if you find yourself limited in any way by the current API, or would like to see additional models, schedulers, or techniques, please open a [GitHub issue](https://github.com/huggingface/diffusers/issues) mentioning what you would like to see.
## Credits
This library concretizes previous work by many different authors and would not have been possible without their great research and implementations. We'd like to thank, in particular, the following implementations which have helped us in our development and without which the API could not have been as polished as it is today:
- @CompVis' latent diffusion models library, available [here](https://github.com/CompVis/latent-diffusion)
- @hojonathanho's original DDPM implementation, available [here](https://github.com/hojonathanho/diffusion), as well as the extremely useful translation into PyTorch by @pesser, available [here](https://github.com/pesser/pytorch_diffusion)
- @ermongroup's DDIM implementation, available [here](https://github.com/ermongroup/ddim).
- @yang-song's Score-VE and Score-VP implementations, available [here](https://github.com/yang-song/score_sde_pytorch)
We also want to thank @heejkoo for the very helpful overview of papers, code and resources on diffusion models, available [here](https://github.com/heejkoo/Awesome-Diffusion-Models), as well as @crowsonkb and @rromb for useful discussions and insights.
## Citation
```bibtex
@misc{von-platen-etal-2022-diffusers,
author = {Patrick von Platen and Suraj Patil and Anton Lozhkov and Pedro Cuenca and Nathan Lambert and Kashif Rasul and Mishig Davaadorj and Thomas Wolf},
title = {Diffusers: State-of-the-art diffusion models},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/diffusers}}
}
``` |
danielpleus/PlattGPT | danielpleus | 2023-02-09T23:13:39Z | 104 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-04T20:25:23Z | ---
widget:
- text: "Brad Pitt is en Schauspeler. He hett speelt"
example_title: "Brad Pitt"
inference:
parameters:
max_length: 100
no_repeat_ngram_size: 1
--- |
peteralexandercharles/Bender | peteralexandercharles | 2023-02-09T23:07:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T23:07:14Z | ---
license: creativeml-openrail-m
---
|
qpham001/TriviaQA_NLP4Web_Group12 | qpham001 | 2023-02-09T22:58:39Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-07T10:21:42Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jayeshvpatil/ppo-LunarLander-v2 | jayeshvpatil | 2023-02-09T22:33:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T01:59:23Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.45 +/- 22.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
robinsk8a/ppo-SnowballTarget | robinsk8a | 2023-02-09T21:48:56Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-09T21:48:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: robinsk8a/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
joelniklaus/legal-greek-roberta-base | joelniklaus | 2023-02-09T21:35:21Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-06T02:37:12Z | ---
tags:
- generated_from_trainer
model-index:
- name: legal-greek-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-greek-roberta-base
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 200000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.8724 | 12.0 | 50000 | 0.6730 |
| 0.7713 | 24.0 | 100000 | 0.5763 |
| 0.7186 | 36.0 | 150000 | 0.5396 |
| 0.7152 | 48.0 | 200000 | 0.5247 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.9.0
- Tokenizers 0.12.0
|
SashkaHavr/NLP4Web_Home_Exercise6_Group13 | SashkaHavr | 2023-02-09T21:14:21Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-08T15:20:16Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: NLP4Web_Home_Exercise6_Group13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP4Web_Home_Exercise6_Group13
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Rolo/ppo-PyramidsTraining | Rolo | 2023-02-09T21:04:40Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-09T21:04:32Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: Rolo/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
02shanky/finetuned-twitter-xlm-roberta-base-emotion | 02shanky | 2023-02-09T21:01:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-09T20:20:54Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: finetuned-twitter-xlm-roberta-base-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9305
- name: F1
type: f1
value: 0.9306713707413102
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-twitter-xlm-roberta-base-emotion
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1793
- Accuracy: 0.9305
- F1: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cfisicaro/poca-SoccerTwos | cfisicaro | 2023-02-09T20:59:54Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-09T20:59:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: cfisicaro/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LarryAIDraw/lenaeightysix-21000 | LarryAIDraw | 2023-02-09T20:44:54Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T18:47:43Z | ---
license: creativeml-openrail-m
---
My second hypernetwork. I think the results are so-so, but it does have some effect.
masterpiece,best quality,art by lenaeightysix,1girl,ahoge,very long hair,silver hair, long sleeves,hair between eyes, bangs,medium breasts, buttons,belt,thighhighs,military uniform,pantyhose,looking at viewer |
Sjors05/rmix_Sjors | Sjors05 | 2023-02-09T20:44:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-09T19:09:13Z | ---
license: creativeml-openrail-m
---
|
robsoneng/ppo-LunarLander-v2 | robsoneng | 2023-02-09T20:35:59Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T20:35:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.85 +/- 18.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SfinOe/dreamlike_2.0 | SfinOe | 2023-02-09T20:33:54Z | 20 | 8 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"photorealistic",
"photoreal",
"en",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-09T20:26:14Z | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- photorealistic
- photoreal
- diffusers
inference: false
---
# Dreamlike Photoreal 2.0 is a photorealistic model based on Stable Diffusion 1.5, made by [dreamlike.art](https://dreamlike.art/).
# If you want to use dreamlike models on your website/app/etc., check the license at the bottom first!
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW content.
You can add **photo** to your prompt to make your gens look more photorealistic.
Non-square aspect ratios work better for some prompts. If you want a portrait photo, try using a vertical aspect ratio. If you want a landscape photo, try using a horizontal aspect ratio.
This model was trained on 768x768px images, so use 768x768px, 640x896px, 896x640px, etc. It also works pretty well with higher resolutions such as 768x1024px or 1024x768px.
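For example — a small sketch using the checkpoint from this card — a vertical portrait can be requested directly through the pipeline's `height`/`width` arguments:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# vertical aspect ratio, matching the 768px training resolution
image = pipe("photo, portrait of a woman in a red coat, natural light", height=896, width=640).images[0]
image.save("portrait.jpg")
```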
### Examples
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview1.jpg" style="max-width: 800px;" width="100%"/>
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview2.jpg" style="max-width: 800px;" width="100%"/>
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview3.jpg" style="max-width: 800px;" width="100%"/>
### dreamlike.art
You can use this model for free on [dreamlike.art](https://dreamlike.art/)!
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike.jpg" style="max-width: 1000px;" width="100%"/>
### CKPT
[Download dreamlike-photoreal-2.0.ckpt (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.ckpt)
### Safetensors
[Download dreamlike-photoreal-2.0.safetensors (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "dreamlike-art/dreamlike-photoreal-2.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "photo, a church in the middle of a field of crops, bright cinematic lighting, gopro, fisheye lens"
image = pipe(prompt).images[0]
image.save("./result.jpg")
```
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/church.jpg" style="max-width: 640px;" width="100%"/>
# License
This model is licensed under a **modified** CreativeML OpenRAIL-M license.
- **You are not allowed to host, finetune, or do inference with the model or its derivatives on websites/apps/etc. If you want to, please email us at [email protected]**
- **You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Photoreal 2.0) and include the license as well as a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0)**
- **You are free to use the outputs (images) of the model for commercial purposes in teams of 10 or less**
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the **modified** CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md
|
iamannika/bert-finetuned-squad | iamannika | 2023-02-09T20:19:14Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-09T06:11:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Rolo/ppo-SnowballTarget2 | Rolo | 2023-02-09T20:15:29Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-09T20:15:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Rolo/ppo-SnowballTarget2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jamesdolezal/CTransPath | jamesdolezal | 2023-02-09T19:17:09Z | 0 | 2 | null | [
"license:gpl-3.0",
"region:us"
]
| null | 2023-02-09T19:10:23Z | ---
license: gpl-3.0
---
[UNOFFICIAL]
This is the pretrained CTransPath model that accompanies the manuscript *Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification*, published by Xiyue Wang *et al.* in Medical Image Analysis (October 2022, DOI: https://doi.org/10.1016/j.media.2022.102559).
This model has been uploaded to Hugging Face for easier sharing, but it has not been verified by, and is in no way affiliated with, the original authors.
The official pretrained model is available on the official GitHub repository (https://github.com/Xiyue-Wang/TransPath) and Google Drive (https://drive.google.com/file/d/1DoDx_70_TLj98gTf6YTXnu4tFhsFocDX/view?usp=sharing). The license as included in the original repository is GPL-3.0.
|
lmqg/flan-t5-small-squad-ae | lmqg | 2023-02-09T19:16:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"answer extraction",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-09T19:14:57Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- answer extraction
widget:
- text: "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress."
example_title: "Answer Extraction Example 1"
- text: "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. <hl>"
example_title: "Answer Extraction Example 2"
model-index:
- name: lmqg/flan-t5-small-squad-ae
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_squad
type: default
args: default
metrics:
- name: BLEU4 (Answer Extraction)
type: bleu4_answer_extraction
value: 34.6
- name: ROUGE-L (Answer Extraction)
type: rouge_l_answer_extraction
value: 67.61
- name: METEOR (Answer Extraction)
type: meteor_answer_extraction
value: 42.59
- name: BERTScore (Answer Extraction)
type: bertscore_answer_extraction
value: 91.1
- name: MoverScore (Answer Extraction)
type: moverscore_answer_extraction
value: 80.54
- name: AnswerF1Score (Answer Extraction)
type: answer_f1_score__answer_extraction
value: 68.13
- name: AnswerExactMatch (Answer Extraction)
type: answer_exact_match_answer_extraction
value: 55.83
---
# Model Card of `lmqg/flan-t5-small-squad-ae`
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/flan-t5-small-squad-ae")
# model prediction
answers = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-ae")
output = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```
## Evaluation
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 55.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 68.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 48.25 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 43.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 38.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 34.6 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 42.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 80.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 67.61 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['answer']
- prefix_types: ['ae']
- model: google/flan-t5-small
- max_length: 512
- max_length_output: 32
- epoch: 8
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
atorre/poca-SoccerTwos-20M | atorre | 2023-02-09T19:15:25Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-09T19:15:15Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: atorre/poca-SoccerTwos-20M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
figfig/whisper-small-en | figfig | 2023-02-09T19:10:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:figfig/restaurant_order_test",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-06T14:06:09Z | ---
language:
- en
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- figfig/restaurant_order_test
metrics:
- wer
model-index:
- name: restaurant_test_model
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: test_data
type: figfig/restaurant_order_test
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 78.57142857142857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# restaurant_test_model
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the test_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5435
- Wer: 78.5714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 10.0 | 10 | 2.2425 | 7.1429 |
| No log | 20.0 | 20 | 0.6651 | 0.0 |
| 2.4375 | 30.0 | 30 | 0.5776 | 35.7143 |
| 2.4375 | 40.0 | 40 | 0.5435 | 78.5714 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mjaydenkim/autotrain-ma-detection-test-3372892714 | mjaydenkim | 2023-02-09T19:08:48Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:mjaydenkim/autotrain-data-ma-detection-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-09T19:07:39Z | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- mjaydenkim/autotrain-data-ma-detection-test
co2_eq_emissions:
emissions: 1.2555854454965398
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3372892714
- CO2 Emissions (in grams): 1.2556
## Validation Metrics
- Loss: 0.153
- Accuracy: 0.941
- Precision: 0.892
- Recall: 0.966
- AUC: 0.988
- F1: 0.928
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mjaydenkim/autotrain-ma-detection-test-3372892714
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mjaydenkim/autotrain-ma-detection-test-3372892714", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mjaydenkim/autotrain-ma-detection-test-3372892714", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
fermaat/a2c-AntBulletEnv-v0 | fermaat | 2023-02-09T19:07:23Z | 6 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-09T19:06:04Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2275.50 +/- 137.45
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
figfig/restaurant_local_test_model | figfig | 2023-02-09T18:43:14Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-09T17:05:39Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
niv-al/sqt5-small | niv-al | 2023-02-09T18:03:31Z | 103 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"sq",
"en",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-08T20:42:37Z | ---
license: openrail
language:
- sq
- en
--- |
yujiepan/bert-base-uncased-sst2-unstructured80-PTQ | yujiepan | 2023-02-09T17:57:58Z | 32 | 0 | transformers | [
"transformers",
"pytorch",
"openvino",
"bert",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-06T20:11:20Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-base-uncased-sst2-unstructured80-PTQ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-unstructured80-PTQ
This model applies simple post-training quantization (PTQ) to [yujiepan/bert-base-uncased-sst2-unstructured-sparsity-80](https://huggingface.co/yujiepan/bert-base-uncased-sst2-unstructured-sparsity-80) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- torch loss: 0.4029
- torch accuracy: 0.9128
- OpenVINO IR accuracy: 0.9117
- Sparsity in transformer block linear layers: 0.80
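One possible way to run the exported model — a sketch assuming the repository ships OpenVINO IR files compatible with the `optimum-intel` integration — is through `OVModelForSequenceClassification`:
```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "yujiepan/bert-base-uncased-sst2-unstructured80-PTQ"
model = OVModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# run the quantized OpenVINO model through the standard transformers pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("a charming and often affecting journey"))
```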
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- num_epochs: 12.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
sh0xb0x/avatarbitch | sh0xb0x | 2023-02-09T17:56:12Z | 32 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-09T17:54:40Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: avatar101
---
### AVATARBITCH Dreambooth model trained by sh0xb0x with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
avatar101 (use that on your prompt)

|
yujiepan/bert-base-uncased-sst2-PTQ | yujiepan | 2023-02-09T17:51:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"openvino",
"bert",
"generated_from_trainer",
"en",
"dataset:glue",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-09T17:38:16Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-base-uncased-sst2-PTQ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-PTQ
This model applies simple post-training quantization (PTQ) to [textattack/bert-base-uncased-SST-2](https://huggingface.co/textattack/bert-base-uncased-SST-2) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- torch loss: 0.2140
- torch accuracy: 0.9243
- OpenVINO IR accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|