modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-14 06:27:53) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 519 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-14 06:27:45) | card (string, length 11 – 1.01M)
---|---|---|---|---|---|---|---|---|---
UltronAI1/helptechno | UltronAI1 | 2023-02-07T18:58:45Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-02-07T18:56:10Z | ---
license: afl-3.0
---
Chatgpt/run/computershelp101.info
|
Ramuvannela/bert-fine-tuned-cola | Ramuvannela | 2023-02-07T18:58:17Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-04T17:45:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6107419227947289
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8073
- Matthews Correlation: 0.6107
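As a quick sanity check, the checkpoint can be exercised with the standard `text-classification` pipeline; a minimal sketch (the example sentence is invented, and the label names depend on the exported config):
```python
from transformers import pipeline

# CoLA is a grammatical-acceptability task; label names depend on the exported config
classifier = pipeline("text-classification", model="Ramuvannela/bert-fine-tuned-cola")
print(classifier("The book was written by John."))
```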
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4681 | 1.0 | 1069 | 0.5613 | 0.4892 |
| 0.321 | 2.0 | 2138 | 0.6681 | 0.5851 |
| 0.1781 | 3.0 | 3207 | 0.8073 | 0.6107 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
summervent/speller-t5-90 | summervent | 2023-02-07T18:57:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-02T23:43:26Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: speller-t5-90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speller-t5-90
This model is a fine-tuned version of [sberbank-ai/ruT5-base](https://huggingface.co/sberbank-ai/ruT5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1486
- Rouge1: 19.3503
- Rouge2: 8.3898
- Rougel: 19.4209
- Rougelsum: 19.4915
- Gen Len: 41.3136
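The base model is ruT5, so the checkpoint can be tried with the `text2text-generation` pipeline; a minimal sketch, assuming the task is Russian spelling correction (the card does not state it explicitly):
```python
from transformers import pipeline

# assumption: the model corrects spelling errors in Russian text
speller = pipeline("text2text-generation", model="summervent/speller-t5-90")
print(speller("привет, как дила?")[0]["generated_text"])
```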
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.3435 | 0.03 | 500 | 0.2100 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.4492 |
| 0.3245 | 0.07 | 1000 | 0.2102 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.1949 |
| 0.3777 | 0.1 | 1500 | 0.2010 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.0 |
| 0.3643 | 0.14 | 2000 | 0.1980 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.0593 |
| 0.3212 | 0.17 | 2500 | 0.1986 | 19.209 | 8.2062 | 19.2797 | 19.2797 | 41.1525 |
| 0.4181 | 0.2 | 3000 | 0.1896 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 42.2373 |
| 0.3175 | 0.24 | 3500 | 0.1879 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.4576 |
| 0.3399 | 0.27 | 4000 | 0.1838 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.1102 |
| 0.314 | 0.31 | 4500 | 0.1837 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.0339 |
| 0.3063 | 0.34 | 5000 | 0.1796 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 40.9407 |
| 0.3434 | 0.38 | 5500 | 0.1769 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 40.8814 |
| 0.376 | 0.41 | 6000 | 0.1790 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.0593 |
| 0.3355 | 0.44 | 6500 | 0.1735 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.4153 |
| 0.3181 | 0.48 | 7000 | 0.1665 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.0508 |
| 0.3017 | 0.51 | 7500 | 0.1701 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.2881 |
| 0.2953 | 0.55 | 8000 | 0.1664 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.2458 |
| 0.2711 | 0.58 | 8500 | 0.1664 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.4068 |
| 0.3661 | 0.61 | 9000 | 0.1626 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.2797 |
| 0.273 | 0.65 | 9500 | 0.1585 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.3051 |
| 0.3346 | 0.68 | 10000 | 0.1627 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.2797 |
| 0.2529 | 0.72 | 10500 | 0.1590 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.2627 |
| 0.2926 | 0.75 | 11000 | 0.1601 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.2712 |
| 0.2677 | 0.78 | 11500 | 0.1551 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.2797 |
| 0.2746 | 0.82 | 12000 | 0.1570 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.1186 |
| 0.2494 | 0.85 | 12500 | 0.1513 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.2373 |
| 0.2834 | 0.89 | 13000 | 0.1506 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.2458 |
| 0.2646 | 0.92 | 13500 | 0.1512 | 19.5975 | 8.7571 | 19.7034 | 19.774 | 41.3729 |
| 0.2782 | 0.95 | 14000 | 0.1528 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.3644 |
| 0.2954 | 0.99 | 14500 | 0.1486 | 19.3503 | 8.3898 | 19.4209 | 19.4915 | 41.3136 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.7.1+cu110
- Datasets 2.9.0
- Tokenizers 0.13.2
|
austinmw/q-FrozenLake-v1-4x4-noSlippery | austinmw | 2023-02-07T18:47:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T18:47:08Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="austinmw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
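The snippet assumes a `load_from_hub` helper defined in the course notebook rather than in a published package; a minimal sketch of one way to write it with `huggingface_hub` (the helper's exact behavior here is an assumption):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table, env_id, hyperparameters, ...) from the Hub
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```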
|
Hatman/ddpm-celebahq-finetuned-few-shot-universe | Hatman | 2023-02-07T18:46:22Z | 10 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-02-07T18:46:12Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
A model from google/ddpm-celebahq-256 finetuned using the huggan/few-shot-universe dataset
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Hatman/ddpm-celebahq-finetuned-few-shot-universe')
image = pipeline().images[0]
image
```
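A possible follow-up: DDPM sampling takes many denoising steps and is slow on CPU, so moving the pipeline to a GPU and saving the result is usually worthwhile; a minimal sketch (the device handling and filename are assumptions, not part of the original card):
```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('Hatman/ddpm-celebahq-finetuned-few-shot-universe')
pipeline.to('cuda' if torch.cuda.is_available() else 'cpu')  # sampling is slow on CPU
image = pipeline().images[0]
image.save('few_shot_universe_sample.png')  # filename is arbitrary
```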
|
sb3/ppo-CarRacing-v0 | sb3 | 2023-02-07T18:27:18Z | 19 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T18:24:16Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
metrics:
- type: mean_reward
value: 174.99 +/- 100.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga sb3 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ppo --env CarRacing-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'gym.wrappers.resize_observation.ResizeObservation': {'shape': 64}},
{'gym.wrappers.gray_scale_observation.GrayScaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
|
HealthTeam/mt5-small-finetuned-MultiHead-230207 | HealthTeam | 2023-02-07T18:22:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-07T04:31:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-small-finetuned-MultiHead-230207
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-MultiHead-230207
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2185
- Bleu: 14.3905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 3.0155 | 1.0 | 67222 | 2.3749 | 11.2986 |
| 2.7777 | 2.0 | 134444 | 2.2518 | 13.5854 |
| 2.7531 | 3.0 | 201666 | 2.2185 | 14.3905 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
lnros/poca-SoccerTwos | lnros | 2023-02-07T18:19:36Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-07T18:19:27Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: lnros/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Periramm/dqn-SpaceInvadersNoFrameskip-v4 | Periramm | 2023-02-07T18:08:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-28T08:46:00Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 481.00 +/- 176.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Periramm -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Periramm -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Periramm
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
dawokim/pegasus-samsum | dawokim | 2023-02-07T17:54:32Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-07T16:11:25Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.1+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Nikitarabine/G | Nikitarabine | 2023-02-07T17:54:05Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
]
| text-to-image | 2023-02-07T17:52:15Z | ---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- code_eval
library_name: diffusers
pipeline_tag: text-to-image
--- |
sd-concepts-library/matrix | sd-concepts-library | 2023-02-07T17:53:20Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2023-02-04T19:36:30Z | ---
license: mit
---
### matrix on Stable Diffusion
This is the `<hatman-matrix>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
### Troubleshooting
This concept was trained using "CompVis/stable-diffusion-v1-4", which is the model linked in the inference notebook for concepts and has a text-embedding tensor length of [768]. The notebook to train concepts instead links to "stabilityai/stable-diffusion-2", which has a tensor length of [1024], so embeddings trained for one are not compatible with the other.
Here is the new concept you will be able to use as a `style`:



|
Mandoryan/DQN-LunarLander-v2 | Mandoryan | 2023-02-07T17:47:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T16:47:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 134.93 +/- 118.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
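A minimal sketch of what that TODO might look like (the checkpoint filename is an assumption; check the repository's file list, and note the card's metadata names both DQN and PPO):
```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# filename below is an assumption; inspect the repo for the actual checkpoint name
checkpoint = load_from_hub(repo_id="Mandoryan/DQN-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```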
|
qgallouedec/ppo-MiniGrid-DoorKey-5x5-v0 | qgallouedec | 2023-02-07T17:44:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"MiniGrid-DoorKey-5x5-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T17:44:42Z | ---
library_name: stable-baselines3
tags:
- MiniGrid-DoorKey-5x5-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MiniGrid-DoorKey-5x5-v0
type: MiniGrid-DoorKey-5x5-v0
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **MiniGrid-DoorKey-5x5-v0**
This is a trained model of a **PPO** agent playing **MiniGrid-DoorKey-5x5-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env MiniGrid-DoorKey-5x5-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper', 'gym_minigrid.wrappers.FlatObsWrapper'),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.00025),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 128),
('n_timesteps', 100000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
pfunk/Pong-v4-DQPN_p30_pt0.1-seed1 | pfunk | 2023-02-07T17:19:55Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T17:19:34Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 0.70 +/- 4.71
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p30_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```bash
pip install "cleanrl[DQPN_p30_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p30_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p30_pt0.1 --start-policy-f 30000 --end-policy-f 30000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 30000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p30_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 30000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
palinachka/ya | palinachka | 2023-02-07T17:14:02Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
]
| null | 2023-02-07T17:14:02Z | ---
license: bigscience-bloom-rail-1.0
---
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_256 | gokuls | 2023-02-07T17:13:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T17:04:39Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.15492957746478872
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5279
- Accuracy: 0.1549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3422 | 1.0 | 218 | 0.5279 | 0.1549 |
| 0.305 | 2.0 | 436 | 0.5961 | 0.1268 |
| 0.291 | 3.0 | 654 | 0.6364 | 0.0845 |
| 0.2816 | 4.0 | 872 | 0.6604 | 0.0986 |
| 0.2744 | 5.0 | 1090 | 0.6627 | 0.0845 |
| 0.2686 | 6.0 | 1308 | 0.6618 | 0.0986 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LarryAIDraw/sinonGGO_sinonGGO | LarryAIDraw | 2023-02-07T17:12:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-07T17:10:41Z | ---
license: creativeml-openrail-m
---
|
Minghai/ivorish | Minghai | 2023-02-07T17:05:21Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-07T17:05:21Z | ---
license: creativeml-openrail-m
---
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_256 | gokuls | 2023-02-07T17:03:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T16:11:58Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.17779903983231324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4500
- Pearson: 0.1761
- Spearmanr: 0.1778
- Combined Score: 0.1770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 0.5832 | 1.0 | 1259 | 1.5244 | 0.1737 | 0.1803 | 0.1770 |
| 0.2202 | 2.0 | 2518 | 1.4500 | 0.1761 | 0.1778 | 0.1770 |
| 0.1249 | 3.0 | 3777 | 1.4720 | 0.1743 | 0.1782 | 0.1762 |
| 0.0822 | 4.0 | 5036 | 1.5790 | 0.1581 | 0.1658 | 0.1619 |
| 0.0611 | 5.0 | 6295 | 1.4750 | 0.1850 | 0.1905 | 0.1878 |
| 0.0477 | 6.0 | 7554 | 1.5776 | 0.1612 | 0.1694 | 0.1653 |
| 0.0394 | 7.0 | 8813 | 1.5512 | 0.1648 | 0.1694 | 0.1671 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
DL82/remylacroix | DL82 | 2023-02-07T16:57:09Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-07T16:55:36Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: remylacroix
---
### remylacroix Dreambooth model trained by DL82 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) using the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
remylacroix (use that in your prompt)

|
sd-concepts-library/chaaya-2-0 | sd-concepts-library | 2023-02-07T16:54:10Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-02-07T16:54:03Z | ---
license: mit
---
### Chaaya 2.0 on Stable Diffusion
This is the `<skschaaya>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:












|
virto/mt5-small-finetuned-rabbi-kook | virto | 2023-02-07T16:48:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-02-07T15:10:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-rabbi-kook
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-rabbi-kook
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 223 | 6.4428 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.11.0
|
frangiral/Taxi-v3-Try1 | frangiral | 2023-02-07T16:13:39Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T16:13:37Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-Try1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="frangiral/Taxi-v3-Try1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jannikskytt/Reinforce-PixelCopter | jannikskytt | 2023-02-07T16:13:10Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T16:13:06Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.20 +/- 8.35
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
RMAV/taxi-driver | RMAV | 2023-02-07T16:10:54Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T15:49:30Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-driver
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="RMAV/taxi-driver", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Belldofers/BelldofersTestModel | Belldofers | 2023-02-07T16:08:27Z | 0 | 0 | allennlp | [
"allennlp",
"question-answering",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:openwebtext",
"dataset:Aunsiels/Quasimodo",
"doi:10.57967/hf/0334",
"region:us"
]
| question-answering | 2023-02-07T15:11:28Z | ---
datasets:
- fka/awesome-chatgpt-prompts
- openwebtext
- Aunsiels/Quasimodo
pipeline_tag: question-answering
library_name: allennlp
metrics:
- accuracy
--- |
RMAV/q-FrozenLake-v1-4x4-noSlippery | RMAV | 2023-02-07T16:08:22Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T15:35:10Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="RMAV/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vvn0/ppo-SnowballTarget | vvn0 | 2023-02-07T15:52:25Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-07T15:52:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: vvn0/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kwangjin/novel_lora | kwangjin | 2023-02-07T15:48:00Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-07T14:52:28Z |
---
license: creativeml-openrail-m
base_model: ../../../diffusers_ckpts/anythingv3/
instance_prompt: a photo of sks person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - kwangjin/novel_lora
These are LoRA adaptation weights for ../../../diffusers_ckpts/anythingv3/. The weights were trained on "a photo of sks person" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
pearsonkyle/ArtPrompter | pearsonkyle | 2023-02-07T15:46:13Z | 27 | 2 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-01-18T05:09:34Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ArtPrompter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# [ArtPrompter](https://pearsonkyle.github.io/Art-Prompter/)
A [gpt2](https://huggingface.co/gpt2)-powered predictive model for writing descriptive text prompts for A.I. image generators (e.g. MidJourney, Stable Diffusion, ArtBot, etc.). The model was trained on a custom dataset containing 666K unique prompts from MidJourney. Simply start a prompt and let the algorithm suggest ways to finish it.

[](https://colab.research.google.com/drive/1HQOtD2LENTeXEaxHUfIhDKUaPIGd6oTR?usp=sharing)
```python
from transformers import pipeline

prompter = pipeline('text-generation', model='pearsonkyle/ArtPrompter', tokenizer='gpt2')
texts = prompter('A portal to a galaxy, view with', max_length=30, num_return_sequences=5)
for i in range(5):
    print(texts[i]['generated_text'] + '\n')
```
## Intended uses & limitations
Build sick prompts, and lots of them; use it to [make animations](https://colab.research.google.com/drive/1Ooe7c87xGMa9oG5BDrFVzYqJLvnoKcyZ?usp=sharing) or a Discord bot that can interact with MidJourney.
[](https://discord.gg/3S8Taqa2Xy)
## Examples
- *The entire universe is a simulation,a confessional with a smiling guy fawkes mask, symmetrical, inviting,hyper realistic*
- *a pug disguised as a teacher. Setting is a class room*
- *I wish I had an angel For one moment of love I wish I had your angel Your Virgin Mary undone Im in love with my desire Burning angelwings to dust*
- *The heart of a galaxy, surrounded by stars, magnetic fields, big bang, cinestill 800T,black background, hyper detail, 8k, black*
## Training procedure
~30 hours of fine-tuning on an RTX 3070 with 666K unique prompts
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Tokenizers 0.13.2 |
BhavyaMuni/model-v4 | BhavyaMuni | 2023-02-07T15:44:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-07T14:57:12Z | ---
tags:
- generated_from_trainer
model-index:
- name: model-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-v4
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 8
- eval_batch_size: 8
- seed: 448538920
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6681 | 1.0 | 217 | 1.4124 |
| 1.7025 | 2.0 | 434 | 1.4686 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Sakil/bertfined_finetunedmodel_fakenews | Sakil | 2023-02-07T15:31:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-02-07T15:21:22Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
--- |
LouisDT/videomae-base-finetuned-ucf1012bovi-subset | LouisDT | 2023-02-07T15:22:22Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-02-02T16:21:05Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf1012bovi-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf1012bovi-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5322
- Accuracy: 0.7812
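VideoMAE checkpoints work with the `video-classification` pipeline in recent Transformers releases; a minimal sketch (the clip path is a placeholder, and the label set is unknown since the card names an unknown dataset):
```python
from transformers import pipeline

clf = pipeline("video-classification", model="LouisDT/videomae-base-finetuned-ucf1012bovi-subset")
print(clf("sample_clip.mp4"))  # placeholder path to a local video file
```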
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5824 | 0.25 | 30 | 0.5322 | 0.7812 |
| 0.6914 | 1.25 | 60 | 0.5260 | 0.7812 |
| 0.5257 | 2.25 | 90 | 0.5900 | 0.7812 |
| 0.6191 | 3.25 | 120 | 0.5305 | 0.7812 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Krud/microsoft_xtremedistil-l12-h384-uncased-TriviaQA | Krud | 2023-02-07T15:18:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-07T15:04:45Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on an unknown dataset.
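An extractive QA checkpoint like this drops straight into the `question-answering` pipeline; a minimal sketch (the question/context pair is an invented example):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Krud/microsoft_xtremedistil-l12-h384-uncased-TriviaQA")
result = qa(question="Who wrote Hamlet?", context="Hamlet is a tragedy written by William Shakespeare.")
print(result["answer"], result["score"])
```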
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
franjamonga/translate | franjamonga | 2023-02-07T15:10:58Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-02-07T15:05:42Z | ---
language:
- es
- en
tags:
- translation
license: apache-2.0
---
### spa-eng
* source group: Spanish
* target group: English
* OPUS readme: [spa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md)
* model: transformer
* source language(s): spa
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-spaeng.spa.eng | 30.6 | 0.570 |
| news-test2008-spaeng.spa.eng | 27.9 | 0.553 |
| newstest2009-spaeng.spa.eng | 30.4 | 0.572 |
| newstest2010-spaeng.spa.eng | 36.1 | 0.614 |
| newstest2011-spaeng.spa.eng | 34.2 | 0.599 |
| newstest2012-spaeng.spa.eng | 37.9 | 0.624 |
| newstest2013-spaeng.spa.eng | 35.3 | 0.609 |
| Tatoeba-test.spa.eng | 59.6 | 0.739 |
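The card reports benchmarks but no inference snippet; a minimal sketch using the standard Marian classes (assuming this repo packages ordinary MarianMT weights, as its `marian` tag suggests):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("franjamonga/translate")
model = MarianMTModel.from_pretrained("franjamonga/translate")

batch = tokenizer(["El gato duerme en el sofá."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))  # expected: the English translation
```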
### System Info:
- hf_name: spa-eng
- source_languages: spa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'en']
- src_constituents: {'spa'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-eng/opus-2020-08-18.test.txt
- src_alpha3: spa
- tgt_alpha3: eng
- short_pair: es-en
- chrF2_score: 0.7390000000000001
- bleu: 59.6
- brevity_penalty: 0.9740000000000001
- ref_len: 79376.0
- src_name: Spanish
- tgt_name: English
- train_date: 2020-08-18 00:00:00
- src_alpha2: es
- tgt_alpha2: en
- prefer_old: False
- long_pair: spa-eng
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20 |
VladDe/Reinforce-copter | VladDe | 2023-02-07T15:05:18Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T14:50:18Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.50 +/- 6.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LLukas22/bert-base-uncased-embedding-step-scheduler | LLukas22 | 2023-02-07T15:02:27Z | 4 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tensorboard",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"generated_from_trainer",
"dataset:squad",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-07T14:02:57Z | ---
license: cc-by-nc-4.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- generated_from_trainer
datasets:
- squad
---
# bert-base-uncased-embedding-step-scheduler
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [squad](https://huggingface.co/datasets/squad) dataset.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LLukas22/bert-base-uncased-embedding-step-scheduler')
embeddings = model.encode(sentences)
print(embeddings)
```
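Given the retrieval-style top-k evaluation reported below, a natural follow-up is scoring sentence pairs by cosine similarity; a minimal sketch (the sentences are invented examples):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LLukas22/bert-base-uncased-embedding-step-scheduler")
emb = model.encode(["How do I reset my password?", "Password reset instructions"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the two embeddings
```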
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2E-05
- per device batch size: 26
- effective batch size: 26
- seed: 42
- optimizer: AdamW with betas (0.9,0.999) and eps 1E-08
- weight decay: 1E-02
- D-Adaptation: False
- Warmup: False
- number of epochs: 3
- mixed_precision_training: bf16
## Training results
| Epoch | Train Loss | Validation Loss |
| ----- | ---------- | --------------- |
| 0 | 0.0647 | 0.0876 |
| 1 | 0.0328 | 0.0826 |
| 2 | 0.0298 | 0.082 |
## Evaluation results
| Epoch | top_1 | top_3 | top_5 | top_10 | top_25 |
| ----- | ----- | ----- | ----- | ----- | ----- |
| 0 | 0.586 | 0.778 | 0.843 | 0.911 | 0.968 |
| 1 | 0.596 | 0.792 | 0.853 | 0.917 | 0.969 |
| 2 | 0.595 | 0.794 | 0.854 | 0.917 | 0.97 |
## Framework versions
- Transformers: 4.25.1
- PyTorch: 1.13.1
- PyTorch Lightning: 1.8.6
- Datasets: 2.7.1
- Tokenizers: 0.12.1
- Sentence Transformers: 2.2.2
## Additional Information
This model was trained as part of my Master's Thesis **'Evaluation of transformer-based language models for use in service information systems'**. The source code is available on [GitHub](https://github.com/LLukas22/Master).
|
acampillos/q-FrozenLake-v1-4x4-noSlippery | acampillos | 2023-02-07T14:59:11Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T14:59:08Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="acampillos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NihiLicA/q-FrozenLake-v1-4x4-noSlippery | NihiLicA | 2023-02-07T14:57:47Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T14:57:44Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="NihiLicA/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Liapunov/a2c-AntBulletEnv-v0 | Liapunov | 2023-02-07T14:56:46Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T14:55:41Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 717.26 +/- 62.68
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="Liapunov/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Snim/taxi_DRLCourse | Snim | 2023-02-07T14:33:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T14:33:38Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_DRLCourse
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Snim/taxi_DRLCourse", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Mayhem50/sgpt-bloom-560m-nli-v3 | Mayhem50 | 2023-02-07T14:19:06Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bloom",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-02-07T07:43:45Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Mayhem50/sgpt-bloom-560m-nli-v3
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Mayhem50/sgpt-bloom-560m-nli-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Mayhem50/sgpt-bloom-560m-nli-v3')
model = AutoModel.from_pretrained('Mayhem50/sgpt-bloom-560m-nli-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Mayhem50/sgpt-bloom-560m-nli-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3076 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 150, 'do_lower_case': False}) with Transformer model: BloomModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
zuzhe/Mecha-model | zuzhe | 2023-02-07T14:15:56Z | 0 | 27 | null | [
"license:openrail",
"region:us"
]
| null | 2023-02-06T13:04:10Z | ---
license: openrail
---
The mecha model needs a low CFG scale, around 3.5-7. Because the training set contains only upper-body images, only the upper body is stable; forgive me for not doing better.
Thanks to my QQ friends for their long-term help and teaching, and thanks to Mr. Lin for his training set.
By 昂扬
Use a VAE with high saturation.
Real mechanical texture
Realistic
Metal details
Dirt, dust, damage and wear, battle damage
Mecha model










|
nlpaumom/tinybert_hotpotqa | nlpaumom | 2023-02-07T14:08:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-03T13:27:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_6L_768D](https://huggingface.co/huawei-noah/TinyBERT_General_6L_768D) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
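A minimal inference sketch (the question and context are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline("question-answering", model="nlpaumom/tinybert_hotpotqa")

print(qa(
    question="Which city hosted the 1936 Summer Olympics?",
    context="The 1936 Summer Olympics were held in Berlin, Germany.",
))
```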
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
azaazato/ppo-Huggy | azaazato | 2023-02-07T13:57:08Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-07T13:57:01Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: azaazato/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
hakuto/sybian_LoRA | hakuto | 2023-02-07T13:56:52Z | 0 | 7 | null | [
"region:us"
]
| null | 2023-02-05T12:29:34Z | You should be able to trigger it with a prompt like "1girl, riding on a sybian" or "woman riding on a sybian".
The captions used for training are included in the metadata. |
fathyshalab/clinic-kitchen_and_dining-roberta-domain-adaptation | fathyshalab | 2023-02-07T13:49:04Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-02-07T13:48:46Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# fathyshalab/clinic-kitchen_and_dining-roberta-domain-adaptation
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/clinic-kitchen_and_dining-roberta-domain-adaptation")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
quackquack22/Gloria_Sato_LoRa | quackquack22 | 2023-02-07T13:42:46Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-07T13:41:07Z | ---
license: creativeml-openrail-m
---
You may put 'molly mcgee' in the prompt in Stable Diffusion WebUI.
I made this with the abyssOrangeMix2 model. |
jordiclive/flan-t5-11b-summarizer-filtered | jordiclive | 2023-02-07T13:13:59Z | 127 | 16 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"extractive",
"summary",
"abstractive",
"multi-task",
"document summary",
"en",
"dataset:jordiclive/scored_summarization_datasets",
"dataset:jordiclive/wikipedia-summary-dataset",
"license:apache-2.0",
"license:bsd-3-clause",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-02-07T12:05:57Z | ---
language:
- en
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- extractive
- summary
- abstractive
- multi-task
- document summary
datasets:
- jordiclive/scored_summarization_datasets
- jordiclive/wikipedia-summary-dataset
metrics:
- rouge
---
# Multi-purpose Summarizer (Fine-tuned 11B google/flan-t5-xxl on several Summarization datasets)
<a href="https://colab.research.google.com/drive/1fNOfy7oHYETI_KzJSz8JrhYohFBBl0HY">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
A fine-tuned version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl) on various summarization datasets (xsum, wikihow, cnn_dailymail/3.0.0, samsum, scitldr/AIC, billsum, TLDR, wikipedia-summary)
70% of the data was also filtered using the [contriever](https://github.com/facebookresearch/contriever), with a cosine similarity of 0.6 between text and summary as the threshold.
Goal: a model that can serve as a general-purpose summarizer for academic and everyday use. Control over the type of summary can be exercised by varying the instruction prepended to the source document. The result works well on many kinds of text, although the model was trained with a max source length of 512 tokens and a max summary length of 150 tokens.
---
## Usage
Check the colab notebook for desired usage.
**The model expects a prompt prepended to the source document to indicate the type of summary**, this model was trained with a large (100s) variety of prompts:
```python
example_prompts = {
"social": "Produce a short summary of the following social media post:",
"ten": "Summarize the following article in 10-20 words:",
"5": "Summarize the following article in 0-5 words:",
"100": "Summarize the following article in about 100 words:",
"summary": "Write a ~ 100 word summary of the following text:",
"short": "Provide a short summary of the following article:",
}
```
The model has also learned to follow summary lengths specified in words, either as a range ("x-y words") or as "~/approximately/about x words."
Prompts should be formatted with a colon at the end so that the input to the model is formatted as e.g. "Summarize the following: \n\n {input_text}"
After `pip install transformers` run the following code:
This pipeline will run slower and does not expose some of the tokenization parameters available in the Colab.
```python
import torch
from transformers import pipeline
summarizer = pipeline("summarization", "jordiclive/flan-t5-11b-summarizer-filtered", torch_dtype=torch.bfloat16)
raw_document = 'You must be 18 years old to live or work in New York State...'
prompt = "Summarize the following article in 10-20 words:"
results = summarizer(
f"{prompt} \n\n {raw_document}",
num_beams=5,
min_length=5,
no_repeat_ngram_size=3,
truncation=True,
max_length=512,
)
```
---
## Training procedure
- Training was done in BF16 with DeepSpeed stage 2 (CPU offload) for 1 epoch, with validation loss monitored.
## Hardware
- GPU count 8 NVIDIA A100-SXM4-80GB
- CPU count 48
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- effective_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- warmup_steps: 2000
- num_epochs: 4
### Framework versions
- Transformers 4.24.0
- Pytorch 1.9.1+cu111
- Deepspeed 0.7.4
- Pytorch-lightning 1.8.1 |
zlicastro/zl-poca-SoccerTwos | zlicastro | 2023-02-07T13:04:15Z | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-07T13:04:07Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: zlicastro/zl-poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
pneubauer/basic-Reinforce-Pixelcopter-PLE-v0 | pneubauer | 2023-02-07T12:59:03Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:58:54Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: basic-Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.30 +/- 10.95
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
facebook/mask2former-swin-large-coco-panoptic | facebook | 2023-02-07T12:46:36Z | 126,035 | 29 | transformers | [
"transformers",
"pytorch",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"region:us"
]
| image-segmentation | 2023-01-02T16:24:12Z | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
example_title: Cats
---
# Mask2Former
Mask2Former model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency, by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). |
yizhangliu/poca-SoccerTwos-v3 | yizhangliu | 2023-02-07T12:35:23Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:35:18Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: yizhangliu/poca-SoccerTwos-v3
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
JFelixFF/Test | JFelixFF | 2023-02-07T12:32:48Z | 0 | 0 | null | [
"license:cc-by-nc-sa-2.0",
"region:us"
]
| null | 2023-02-07T12:32:48Z | ---
license: cc-by-nc-sa-2.0
---
|
cfalholt/PPO-PyramidsTraining | cfalholt | 2023-02-07T12:22:42Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:22:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: cfalholt/PPO-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
pfunk/Pong-v4-DQPN_p2-seed1 | pfunk | 2023-02-07T12:22:25Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:22:05Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 2.90 +/- 5.96
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p2]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p2 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p2 --start-policy-f 2000 --end-policy-f 2000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 2000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p2',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 2000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
vvn0/Reinforce-CartPole-v1 | vvn0 | 2023-02-07T12:14:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:13:55Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vaibhav9/mini5-qa | vaibhav9 | 2023-02-07T12:09:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-02-07T12:07:52Z | ---
tags:
- generated_from_trainer
model-index:
- name: mini5-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini5-qa
This model is a fine-tuned version of [mrm8488/bert-mini-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-mini-5-finetuned-squadv2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5918
## Model description
More information needed
## Intended uses & limitations
More information needed
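A minimal inference sketch (the question and context are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
qa = pipeline("question-answering", model="vaibhav9/mini5-qa")

result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```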
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 0.5957 |
| No log | 2.0 | 104 | 0.5762 |
| No log | 3.0 | 156 | 0.5918 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
yashas123/finetuning-sentiment-model | yashas123 | 2023-02-07T12:09:00Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T09:41:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8566666666666667
- name: F1
type: f1
value: 0.858085808580858
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7491
- Accuracy: 0.8567
- F1: 0.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
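A minimal inference sketch (the input is illustrative; labels may appear as LABEL_0/LABEL_1 unless id2label was set in the config):
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub
classifier = pipeline("text-classification", model="yashas123/finetuning-sentiment-model")

print(classifier("This movie was a wonderful surprise!"))
```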
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mili7522/q-FrozenLake-v1-4x4-noSlippery | mili7522 | 2023-02-07T12:06:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:06:10Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mili7522/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
pfunk/Pong-v4-DQPN_p10_e0.50-seed1 | pfunk | 2023-02-07T12:05:25Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:05:05Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 3.90 +/- 7.75
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10_e0.50.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10_e0.50]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10_e0.50 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.50-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.50-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_e0.50-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10_e0.50 --start-policy-f 10000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.5,
'exp_name': 'DQPN_p10_e0.50',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Erwanlbv/Reinforce-hPix-4.15 | Erwanlbv | 2023-02-07T12:03:13Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T12:03:06Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-hPix-4.15
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.67 +/- 20.71
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vaibhav9/DistilBert-qa | vaibhav9 | 2023-02-07T12:00:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-12-28T04:59:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 3.6935 |
| No log | 2.0 | 104 | 3.1373 |
| No log | 3.0 | 156 | 3.0243 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mallycrip/SpaceInvadersNoFrameskip-v4-dqn_atari_e-2 | mallycrip | 2023-02-07T11:59:36Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T11:59:29Z | ---
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 705.50 +/- 237.55
name: mean_reward
verified: false
---
# DQN **SpaceInvadersNoFrameskip-v4**
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 100000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'env_id': 'SpaceInvadersNoFrameskip-v4',
'exp_name': 'dqn_atari_e',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'mallycrip',
'learning_rate': 0.0001,
'learning_starts': 80000,
'save_model': False,
'seed': 1,
'start_e': 1,
'target_network_frequency': 1000,
'tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': False,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
anuoluwa/dqn-SpaceInvadersNoFrameskip-v4 | anuoluwa | 2023-02-07T11:58:07Z | 9 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T11:57:29Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga anuoluwa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga anuoluwa -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga anuoluwa
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 6),
('gradient_steps', 1),
('learning_rate', 0.03),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
justinheyne/Dj | justinheyne | 2023-02-07T11:55:50Z | 0 | 0 | null | [
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
]
| null | 2023-02-07T11:54:58Z | ---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
metrics:
- accuracy
--- |
BachNgoH/ppo-Pyramids | BachNgoH | 2023-02-07T10:53:52Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-02-07T10:53:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: BachNgoH/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
marcoyang/sherpa-ncnn-conv-emformer-transducer-small-2023-02-07 | marcoyang | 2023-02-07T10:45:18Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-02-07T10:38:01Z | ---
license: apache-2.0
---
This model is trained on the `LibriSpeech` dataset and can only be used for English ASR.
It's a very small model, which makes it suitable for embedded devices. |
Classacre/classacre-solo-levelling-art-style-test | Classacre | 2023-02-07T10:37:54Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-07T10:33:29Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Classacre/solo-levelling-art-style-test Dreambooth model trained by Classacre with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:

|
moshew/distilbilstm-finetuned-sst-2-english | moshew | 2023-02-07T10:31:42Z | 0 | 2 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2022-12-20T06:21:53Z | ---
library_name: keras
---
100x smaller, with less than a 0.5-point accuracy drop vs. distilbert-base-uncased-finetuned-sst-2-english
## Model description
A 2-layer BiLSTM model fine-tuned on SST-2 and distilled from a RoBERTa teacher
distilbert-base-uncased-finetuned-sst-2-english: 92.2 accuracy, 67M parameters
moshew/distilbilstm-finetuned-sst-2-english: 91.9 accuracy, 0.66M parameters
## How to get started with the model
Example on SST-2 test dataset classification:
```python
!pip install datasets
from datasets import load_dataset
import numpy as np
from sklearn.metrics import accuracy_score
from keras.preprocessing.text import Tokenizer
from keras.utils import pad_sequences
import tensorflow as tf
from huggingface_hub import from_pretrained_keras
sst2 = load_dataset("SetFit/sst2")
augmented_sst2_dataset = load_dataset("jmamou/augmented-glue-sst2")
# Tokenize our training data
tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(augmented_sst2_dataset['train']['sentence'])
# Encode test data sentences into sequences
test_sequences = tokenizer.texts_to_sequences(sst2['test']['text'])
# Pad the test sequences
test_padded = pad_sequences(test_sequences, padding = 'post', truncating = 'post', maxlen=64)
reloaded_model = from_pretrained_keras('moshew/distilbilstm-finetuned-sst-2-english')
#Evaluate model on SST2 test data (GLUE)
pred=reloaded_model.predict(test_padded)
pred_bin = np.argmax(pred,1)
accuracy_score(pred_bin, sst2['test']['label'])
0.9187259747391543
reloaded_model.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 64)] 0
embedding (Embedding) (None, 64, 50) 500000
bidirectional (Bidirectiona (None, 64, 128) 58880
l)
bidirectional_1 (Bidirectio (None, 128) 98816
nal)
dropout (Dropout) (None, 128) 0
dense (Dense) (None, 2) 258
=================================================================
Total params: 657,954
Trainable params: 657,954
Non-trainable params: 0
_________________________________________________________________
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.0010000000474974513 |
| decay | 0.0 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
KarolK/distilbert-base-uncased-finetuned-emotion | KarolK | 2023-02-07T10:27:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T09:47:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.901
- name: F1
type: f1
value: 0.8975803523323151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3399
- Accuracy: 0.901
- F1: 0.8976
## Model description
More information needed
## Intended uses & limitations
More information needed
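A minimal inference sketch (the input is illustrative; label names depend on the fine-tuned config):
```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline("text-classification", model="KarolK/distilbert-base-uncased-finetuned-emotion")

print(classifier("I can't wait to see my friends this weekend!"))
```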
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5129 | 0.8465 | 0.8300 |
| 0.7331 | 2.0 | 250 | 0.3399 | 0.901 | 0.8976 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 2.8.0
- Tokenizers 0.10.3
|
jannikskytt/dqn-SpaceInvadersNoFrameskip-v4 | jannikskytt | 2023-02-07T10:21:30Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T10:20:45Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 659.00 +/- 302.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jannikskytt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jannikskytt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jannikskytt
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
DigitalUmuganda/Kinyarwanda_YourTTS | DigitalUmuganda | 2023-02-07T10:19:35Z | 8 | 1 | transformers | [
"transformers",
"text-to-speech",
"rw",
"arxiv:2112.02418",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-02-02T23:33:43Z | ---
language:
- rw
pipeline_tag: text-to-speech
---
## Model Description
<!-- Provide a longer summary of what this model is. -->
This model is an end-to-end deep-learning-based Kinyarwanda Text-to-Speech (TTS) system. Due to its zero-shot learning capabilities, new voices can be introduced with as little as one minute of speech.
The model was trained using Coqui's TTS library and the YourTTS[1] architecture, on 67 hours of Kinyarwanda Bible data for 100 epochs.
## Data Sources
<!-- Provide the basic links for the model. -->
- **Audio data:** [www.faithcomesbyhearing.com, version -> Common Language Version audio Old Testament]
- **Text data:** [www.bible.com, version -> Bibiliya Ijambo ry'imana(BIR)(only the Old Testament was used)]
# Usage
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Install the Coqui's TTS library:
```
pip install git+https://github.com/coqui-ai/TTS@0910cb76bcd85df56bf43654bb31427647cdfd0d#egg=TTS
```
Download the files from this repo, then run:
```
tts --text "text" --model_path model.pth --encoder_path SE_checkpoint.pth.tar --encoder_config_path config_se.json --config_path config.json --speakers_file_path speakers.pth --speaker_wav conditioning_audio.wav --out_path out.wav
```
Here the conditioning audio is one or more wav files used to condition the multi-speaker TTS model with a Speaker Encoder; you can give multiple file paths, and the d-vector is computed as their average.
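For example, to condition on several clips (the text and file names are illustrative):
```
tts --text "Muraho" --model_path model.pth --encoder_path SE_checkpoint.pth.tar --encoder_config_path config_se.json --config_path config.json --speakers_file_path speakers.pth --speaker_wav clip1.wav clip2.wav clip3.wav --out_path out.wav
```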
# References
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information should go in this section. -->
[1] [YourTTS paper](https://arxiv.org/pdf/2112.02418.pdf)
|
ShirinP/t5-small-finetuned-dialogsum | ShirinP | 2023-02-07T10:05:23Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-25T04:33:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-dialogsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-dialogsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2771
- Rouge1: 36.5788
- Rouge2: 13.75
- Rougel: 30.9066
- Rougelsum: 32.8118
- Gen Len: 18.846
## Model description
More information needed
## Intended uses & limitations
More information needed
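A minimal inference sketch (the dialogue is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned dialogue summarizer from the Hub
summarizer = pipeline("summarization", model="ShirinP/t5-small-finetuned-dialogsum")

dialogue = (
    "#Person1#: Hi, I'd like to book a table for two tonight. "
    "#Person2#: Sure, what time works for you? "
    "#Person1#: Around seven, please."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```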
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4705 | 1.0 | 4154 | 1.3514 | 34.3952 | 11.8123 | 28.9797 | 31.003 | 18.76 |
| 1.418 | 2.0 | 8308 | 1.3023 | 35.904 | 12.9905 | 30.3195 | 32.1809 | 18.83 |
| 1.3933 | 3.0 | 12462 | 1.2832 | 36.1796 | 13.6096 | 30.6577 | 32.5292 | 18.884 |
| 1.3875 | 4.0 | 16616 | 1.2771 | 36.5788 | 13.75 | 30.9066 | 32.8118 | 18.846 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sd-dreambooth-library/solo-levelling-art-style | sd-dreambooth-library | 2023-02-07T10:05:19Z | 21 | 14 | diffusers | [
"diffusers",
"tensorboard",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2022-11-10T08:32:31Z | ---
license: mit
---
### Solo Levelling Art Style on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by Classacre
This is the Stable Diffusion model fine-tuned on the Solo Levelling Art Style concept, taught to Stable Diffusion with Dreambooth.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
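A minimal `diffusers` loading sketch (fp16 on GPU and the prompt are illustrative; the instance prompt **sololeveling** is described below):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the concept checkpoint from the Hub (use float32 if running on CPU)
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/solo-levelling-art-style", torch_dtype=torch.float16
).to("cuda")

image = pipe("man holding a sword, sololeveling, cinematic, full color").images[0]
image.save("solo_levelling.png")
```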
This is my first model; criticism and advice are welcome. Discord: "Classacre#1028"
This model is inspired by @ogkalu and his comic-diffusion model (https://huggingface.co/ogkalu/Comic-Diffusion). I think it's pretty cool and you should check it out.
I've made this model out of admiration for Jang-Sung Rak (DUBU), who recently passed away. This model is not perfect, and will never be perfect, as the original artist's art is irreplaceable.
### Version 2.1 ###
- This new model uses the anythingv3.0 model as its base instead of SD 1.5. This adds more dynamic backgrounds to the generations but strays a bit away from the original style.
- Characters and people are the same as V2 and have been improved to better reflect Jang-Sung Rak's art style.
- Action generations are often better at 2:1 ratios or 2:2 (1024 x 1024); they are often incomplete at 512x512.
- The calm model, similar to version 2.0, is a good general model and may be better than the action model when generating. Play around with the instance prompts mentioned below and see what you prefer.
The calm and action models have been combined into 1 ckpt file. I've changed the naming scheme to better match the progress of the model e.g. this versions CKPT is called sololevellingV2.1
It can be used by modifying the `instance_prompt(s)`: **SLCalm** and **SLAction**
This model was trained using 20 total images (10 for calm scenes and 10 for action scenes), 2000 total training steps (1e-6). Text encoder trained for 250 steps (1e-6). Text encoder concept training steps: 533. 71 conceptualization (realisation) images.
This model still suffers from text/chat bubbles, but this can be mitigated by adding them to the negative prompts (same as version 2.0).
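A minimal `diffusers` inference sketch, assuming the checkpoint in this repo loads directly as a pipeline (the prompt combines an instance prompt with the additions recommended in the Version 2.0 section; all generation settings here are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/solo-levelling-art-style", torch_dtype=torch.float16
).to("cuda")

# SLAction for fight scenes, SLCalm for calm scenes (see the instance prompts above).
prompt = "SLAction, man holding a sword, black hair, muscular, anime, manhwa, beautiful, 8k"
negative = "chat bubble, chat bubbles, ugly"
image = pipe(prompt, negative_prompt=negative).images[0]
image.save("sl_action.png")
```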
### Version 2.0 ###
This is a massive improvement over the first version. I've split the model into two different models, one for non-action generations (SoloLevellingCalm.ckpt) and one for action generations (SoloLevellingAction.ckpt). I plan on merging the two into one model in the future once I understand how to do captions. The calm (SoloLevellingCalm.ckpt) version of the model is great for general generation using most prompts; it was trained using non-action images taken from the Solo Levelling manhwa.
**Important Prompt Additions:**
Add these prompts to make the generations look remotely like the solo levelling art style and to maintain consistency.
Positive prompts: anime, manhwa, beautiful, 8k
Negative prompts: chat bubble, chat bubbles, ugly
This model suffers from chat bubbles and added VFX words in its generations; this can often be mitigated by using the negative prompts from the Important Prompt Additions above, but it is not perfect.
Sampler and CFG settings are identical to Version 1.0.
### Version 1.0 ###
It can be used by modifying the `instance_prompt(s)`: **sololeveling**
This model was trained using 71 training images, 14200 total training steps, model saved every 3550 steps (25%) and text encoder was trained up to 35%. Made using Stable Diffusion v1.5 as the base model.
The final model struggles to do calm/peaceful environments, as it was trained mainly on cinematic action scenes - this leads to style bleeding, where the AI creates action sequences from seemingly calm and peaceful prompts. Earlier models don't seem to have this problem, albeit they are not as sharp and do not reproduce the style as accurately. Negative prompts seem to lessen the effects of action sequences in the final model, though the results are not as natural as with the older models. The model also struggles to draw eyes in action sequences; you may be able to play with the prompt to get eyes to show up, though. A comparison between the different model versions can be seen below:
Sampler used: DDIM
CFG: 7
Prompt: man holding a sword, black hair, muscular, in a library, cinematic, full color, fighting a man
![Comparison between model versions](https://i.imgur.com/MBjzUVI.jpg)
man eating food in the subway station, sololeveling, happy, cinematic, golden hour
![Example output](https://i.imgur.com/L3MB4Ka.jpg)
In my opinion this model runs best using the DDIM sampler, however I'm still pretty new to experimenting with samplers and my opinion about this may change in the future. Please experiment with the different samplers yourself and choose what you believe is best. The model at 106560 steps may be better than the final model.
Here are the images used for training this concept:
sololeveling
|
BachNgoH/ppo-SnowballTarget | BachNgoH | 2023-02-07T09:56:50Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-07T09:56:44Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: BachNgoH/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ahjim0m0/ppo-Huggy | ahjim0m0 | 2023-02-07T09:33:40Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-07T09:33:33Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ahjim0m0/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
MaCoCu/BERTovski | MaCoCu | 2023-02-07T09:24:34Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"BERTovski",
"MaCoCu",
"bg",
"mk",
"multilingual",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-08-11T08:17:04Z | ---
language:
- bg
- mk
- multilingual
license: cc0-1.0
tags:
- BERTovski
- MaCoCu
---
# Model description
**BERTovski** is a large pre-trained language model trained on Bulgarian and Macedonian texts. It was trained from scratch using the RoBERTa architecture. It was developed as part of the [MaCoCu](https://macocu.eu/) project. The main developer is [Rik van Noord](https://www.rikvannoord.nl/) from the University of Groningen.
BERTovski was trained on 74GB of text, which is equal to just over 7 billion tokens. It was trained for 300,000 steps with a batch size of 2,048, which was approximately 30 epochs.
The training and fine-tuning procedures are described in detail on our [Github repo](https://github.com/macocu/LanguageModels). We aim to train this model for even longer, so keep an eye out for newer versions!
# How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("RVN/BERTovski")
model = AutoModel.from_pretrained("RVN/BERTovski") # PyTorch
model = TFAutoModel.from_pretrained("RVN/BERTovski") # Tensorflow
```
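For example, extracting a mean-pooled sentence embedding with the PyTorch model (a sketch; mean pooling is one common choice, not something prescribed by the authors):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("RVN/BERTovski")
model = AutoModel.from_pretrained("RVN/BERTovski")

# "Sofia is the capital of Bulgaria."
inputs = tokenizer("София е столицата на България.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # shape: (1, seq_len, hidden_size)
sentence_embedding = hidden.mean(dim=1)          # simple mean pooling over tokens
```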
# Data
For training, we used all Bulgarian and Macedonian data that was present in the [MaCoCu](https://macocu.eu/), Oscar, mc4 and Wikipedia corpora. In a manual analysis we found that for Oscar and mc4, if the data did not come from the corresponding domain (.bg or .mk), it was often (badly) machine translated. Therefore, we opted to only use data that originally came from a .bg or .mk domain.
After de-duplicating the data, we were left with a total of 54.5 GB of Bulgarian and 9 GB of Macedonian text. Since there was quite a bit more Bulgarian data, we simply doubled the Macedonian data during training. We trained a shared vocabulary of 32,000 pieces on a subset of the data in which the Bulgarian/Macedonian split was 50/50.
# Benchmark performance
We tested performance of BERTovski on benchmarks of XPOS, UPOS and NER. For Bulgarian, we used the data from the [Universal Dependencies](https://universaldependencies.org/) project. For Macedonian, we used the data sets created in the [babushka-bench](https://github.com/clarinsi/babushka-bench/) project. We also tested on a Google (Bulgarian) and human (Macedonian) translated version of the COPA data set (for details see our [Github repo](https://github.com/RikVN/COPA)). We compare performance to the strong multi-lingual models XLMR-base and XLMR-large. For details regarding the fine-tuning procedure you can checkout our [Github](https://github.com/macocu/LanguageModels).
Scores are averages of three runs, except for COPA, for which we use 10 runs. We use the same hyperparameter settings for all models for UPOS/XPOS/NER, for COPA we optimized the learning rate on the dev set.
## Bulgarian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 99.2 | 99.4 | 98.0 | 98.3 | 93.2 | 92.9 | 56.9 |
| **XLM-R-large** | 99.3 | 99.4 | 97.4 | 97.7 | 93.7 | 93.5 | 53.1 |
| **BERTovski** | 98.8 | 99.1 | 97.6 | 97.8 | 93.5 | 93.3 | 51.7 |
## Macedonian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 98.3 | 98.6 | 97.3 | 97.1 | 92.8 | 94.8 | 55.3 |
| **XLM-R-large** | 98.3 | 98.7 | 97.7 | 97.5 | 93.3 | 95.1 | 52.5 |
| **BERTovski** | 97.8 | 98.1 | 96.4 | 96.0 | 92.8 | 94.6 | 51.8 |
# Acknowledgements
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). The authors received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341 (MaCoCu).
# Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{non-etal-2022-macocu,
title = "{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages",
author = "Ba{\~n}{\'o}n, Marta and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Garc{\'\i}a-Romero, Cristian and
Kuzman, Taja and
Ljube{\v{s}}i{\'c}, Nikola and
van Noord, Rik and
Sempere, Leopoldo Pla and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Rupnik, Peter and
Suchomel, V{\'\i}t and
Toral, Antonio and
van der Werff, Tobias and
Zaragoza, Jaume",
booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
month = jun,
year = "2022",
address = "Ghent, Belgium",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2022.eamt-1.41",
pages = "303--304"
}
``` |
Erwanlbv/Reinforce-model-500 | Erwanlbv | 2023-02-07T09:23:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T09:22:40Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model-500
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Airic/rpg | Airic | 2023-02-07T09:13:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-02-07T04:14:16Z | ---
license: creativeml-openrail-m
---
|
raw-vitor/henry | raw-vitor | 2023-02-07T09:11:31Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-02-07T09:00:25Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### henry Dreambooth model trained by raw-vitor with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_256 | gokuls | 2023-02-07T09:04:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T08:43:04Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.4981949458483754
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5461
- Accuracy: 0.4982
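RTE is a sentence-pair entailment task, so inference takes a premise/hypothesis pair; a minimal sketch (the example sentences are illustrative):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_256",
)
# text-classification pipelines accept premise/hypothesis pairs as a dict
print(clf({"text": "A man is playing a guitar.",
           "text_pair": "A person is playing an instrument."}))
```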
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3321 | 1.0 | 568 | 0.5461 | 0.4982 |
| 0.288 | 2.0 | 1136 | 0.5692 | 0.4910 |
| 0.2847 | 3.0 | 1704 | 0.5578 | 0.4982 |
| 0.283 | 4.0 | 2272 | 0.5487 | 0.4946 |
| 0.2822 | 5.0 | 2840 | 0.5564 | 0.4982 |
| 0.2813 | 6.0 | 3408 | 0.5508 | 0.5235 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gyeoldere/DeBERTa-finetuned-SNLI | gyeoldere | 2023-02-07T08:42:01Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"deberta",
"generated_from_trainer",
"dataset:snli",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-01T08:39:44Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- snli
model-index:
- name: DeBERTa-finetuned-SNLI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-finetuned-SNLI
This model is a fine-tuned version of [gyeoldere/DeBERTa-finetuned-SNLI](https://huggingface.co/gyeoldere/DeBERTa-finetuned-SNLI) on the snli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_256 | gokuls | 2023-02-07T08:41:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-06T15:34:44Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.6342567400445214
- name: F1
type: f1
value: 0.014791125324805117
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7043
- Accuracy: 0.6343
- F1: 0.0148
- Combined Score: 0.3245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.8369 | 1.0 | 29671 | 0.7043 | 0.6343 | 0.0148 | 0.3245 |
| 0.7448 | 2.0 | 59342 | 0.7161 | 0.6355 | 0.0216 | 0.3286 |
| 0.7106 | 3.0 | 89013 | 0.7067 | 0.6466 | 0.0843 | 0.3655 |
| 0.6924 | 4.0 | 118684 | 0.7200 | 0.6401 | 0.0477 | 0.3439 |
| 0.6812 | 5.0 | 148355 | 0.7109 | 0.6424 | 0.0609 | 0.3517 |
| 0.6734 | 6.0 | 178026 | 0.7092 | 0.6440 | 0.0696 | 0.3568 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
iubeda/ppo-Huggy | iubeda | 2023-02-07T08:34:28Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-07T08:34:21Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: iubeda/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
roapple10/ppo-SnowballTarget | roapple10 | 2023-02-07T08:28:46Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-02-07T08:25:54Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/ThomasSimonini/ML-Agents-SnowballTarget
2. Write your model_id: roapple10/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sayakpaul/segformer-b0-scene-parse-150-lora | sayakpaul | 2023-02-07T08:23:11Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-07T05:58:13Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-lora
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
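The repo name indicates these are LoRA adapter weights, so they can presumably be loaded with 🤗 PEFT on top of the base SegFormer; a minimal loading sketch (the 150-class label count follows the scene_parse_150 dataset and is assumed here):
```python
from transformers import SegformerForSemanticSegmentation
from peft import PeftModel

base = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0", num_labels=150  # scene_parse_150 has 150 classes (assumed)
)
model = PeftModel.from_pretrained(base, "sayakpaul/segformer-b0-scene-parse-150-lora")
model.eval()
```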
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
maskip/pretrained-m-bert-100 | maskip | 2023-02-07T08:01:53Z | 1 | 0 | transformers | [
"transformers",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
]
| null | 2023-02-07T07:55:00Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: pretrained-m-bert-100
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained-m-bert-100
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.7003
- Validation Loss: 15.3566
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2669 | 10.9400 | 0 |
| 7.8880 | 10.8967 | 1 |
| 6.8580 | 11.5024 | 2 |
| 6.4321 | 11.5023 | 3 |
| 6.2235 | 11.2212 | 4 |
| 6.0038 | 11.3128 | 5 |
| 5.9881 | 11.3604 | 6 |
| 5.4409 | 11.6872 | 7 |
| 5.2113 | 11.5379 | 8 |
| 5.2660 | 12.0264 | 9 |
| 5.2330 | 11.7627 | 10 |
| 5.1121 | 12.2919 | 11 |
| 5.2126 | 12.6272 | 12 |
| 5.2086 | 11.3478 | 13 |
| 5.2459 | 12.2183 | 14 |
| 5.0035 | 11.7580 | 15 |
| 4.9613 | 12.4852 | 16 |
| 5.0312 | 12.4627 | 17 |
| 5.0073 | 13.6309 | 18 |
| 5.4284 | 12.7799 | 19 |
| 5.3100 | 12.6417 | 20 |
| 5.0765 | 12.7851 | 21 |
| 5.2276 | 13.3828 | 22 |
| 5.1986 | 12.7421 | 23 |
| 4.8935 | 12.8679 | 24 |
| 4.6959 | 12.9201 | 25 |
| 5.4161 | 13.4416 | 26 |
| 5.2459 | 14.0112 | 27 |
| 5.2781 | 13.2740 | 28 |
| 5.5104 | 12.8646 | 29 |
| 5.5024 | 13.7514 | 30 |
| 5.6284 | 13.7125 | 31 |
| 5.8452 | 13.6332 | 32 |
| 5.5767 | 13.8019 | 33 |
| 5.6444 | 13.4279 | 34 |
| 5.5551 | 13.2666 | 35 |
| 5.5421 | 13.5996 | 36 |
| 5.5246 | 13.1686 | 37 |
| 5.5233 | 13.3788 | 38 |
| 5.6011 | 13.4038 | 39 |
| 5.3695 | 13.5241 | 40 |
| 5.5061 | 13.6035 | 41 |
| 5.4534 | 13.8652 | 42 |
| 5.4222 | 13.4525 | 43 |
| 5.4408 | 13.6572 | 44 |
| 5.6683 | 13.7671 | 45 |
| 5.7137 | 14.1255 | 46 |
| 5.6777 | 14.4026 | 47 |
| 5.6776 | 14.3435 | 48 |
| 5.8337 | 14.3650 | 49 |
| 5.8583 | 14.2897 | 50 |
| 5.6849 | 14.6518 | 51 |
| 5.7112 | 14.5420 | 52 |
| 5.7281 | 13.9947 | 53 |
| 5.9154 | 14.3210 | 54 |
| 5.6742 | 13.8867 | 55 |
| 5.8674 | 14.2819 | 56 |
| 5.7128 | 14.5811 | 57 |
| 5.7091 | 14.2113 | 58 |
| 5.7479 | 14.4418 | 59 |
| 5.7632 | 13.9566 | 60 |
| 5.6443 | 14.1394 | 61 |
| 5.6794 | 14.5981 | 62 |
| 5.6450 | 14.5139 | 63 |
| 5.6935 | 14.3309 | 64 |
| 5.7443 | 14.3540 | 65 |
| 5.7014 | 14.7472 | 66 |
| 5.7407 | 14.4245 | 67 |
| 5.9023 | 14.4602 | 68 |
| 5.9222 | 14.6654 | 69 |
| 5.6813 | 14.3179 | 70 |
| 5.6505 | 14.1670 | 71 |
| 5.8407 | 14.2520 | 72 |
| 5.6683 | 14.1696 | 73 |
| 5.6880 | 15.1198 | 74 |
| 5.8254 | 14.2783 | 75 |
| 5.7758 | 14.5934 | 76 |
| 5.7180 | 14.4779 | 77 |
| 5.7348 | 14.3955 | 78 |
| 5.6680 | 14.0637 | 79 |
| 5.7029 | 14.6120 | 80 |
| 5.7088 | 14.3396 | 81 |
| 5.7215 | 14.5878 | 82 |
| 5.5987 | 15.0465 | 83 |
| 5.7613 | 14.7521 | 84 |
| 5.7670 | 14.9828 | 85 |
| 5.7954 | 14.6714 | 86 |
| 5.6080 | 15.2686 | 87 |
| 5.7493 | 14.8772 | 88 |
| 5.6884 | 14.4567 | 89 |
| 5.6932 | 14.3316 | 90 |
| 5.7152 | 15.2725 | 91 |
| 5.6548 | 15.0855 | 92 |
| 5.6196 | 14.8487 | 93 |
| 5.7889 | 14.7169 | 94 |
| 5.5958 | 14.9320 | 95 |
| 5.7047 | 14.8829 | 96 |
| 5.5637 | 14.8704 | 97 |
| 5.6375 | 14.7917 | 98 |
| 5.7003 | 15.3566 | 99 |
### Framework versions
- Transformers 4.27.0.dev0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
imflash217/a2c-AntBulletEnv-v0 | imflash217 | 2023-02-07T07:59:42Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T07:58:29Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1902.19 +/- 153.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename follows the usual `huggingface_sb3` convention and is an assumption here.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from this repo (filename is assumed).
checkpoint = load_from_hub(
    repo_id="imflash217/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)
```
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128 | gokuls | 2023-02-07T07:30:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T06:57:42Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.14084507042253522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5913
- Accuracy: 0.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3404 | 1.0 | 435 | 0.5913 | 0.1408 |
| 0.3027 | 2.0 | 870 | 0.5985 | 0.1127 |
| 0.2935 | 3.0 | 1305 | 0.6351 | 0.1127 |
| 0.2884 | 4.0 | 1740 | 0.6013 | 0.0986 |
| 0.2838 | 5.0 | 2175 | 0.6154 | 0.0986 |
| 0.2788 | 6.0 | 2610 | 0.6608 | 0.0845 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
eshwarprasadS/ppo-Huggy | eshwarprasadS | 2023-02-07T07:27:38Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-02-07T07:27:31Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: eshwarprasadS/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
lucataco/pokemon-lora | lucataco | 2023-02-07T06:59:13Z | 4 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-02-06T23:04:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/lucataco/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.




|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_128 | gokuls | 2023-02-07T06:56:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-07T00:30:23Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.15823601400463258
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_stsb_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4602
- Pearson: 0.1596
- Spearmanr: 0.1582
- Combined Score: 0.1589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|:--------------:|
| 0.5444 | 1.0 | 2518 | 1.4965 | 0.1589 | 0.1763 | 0.1676 |
| 0.3254 | 2.0 | 5036 | 1.5276 | 0.1502 | 0.1674 | 0.1588 |
| 0.2847 | 3.0 | 7554 | 1.5430 | 0.1587 | 0.1680 | 0.1634 |
| 0.2376 | 4.0 | 10072 | 1.6906 | 0.1669 | 0.1786 | 0.1728 |
| 0.1741 | 5.0 | 12590 | 1.4788 | 0.1662 | 0.1725 | 0.1694 |
| 0.1315 | 6.0 | 15108 | 1.5662 | 0.1640 | 0.1700 | 0.1670 |
| 0.1055 | 7.0 | 17626 | 1.5100 | 0.1663 | 0.1698 | 0.1680 |
| 0.0879 | 8.0 | 20144 | 1.4602 | 0.1596 | 0.1582 | 0.1589 |
| 0.0739 | 9.0 | 22662 | 1.6612 | 0.1584 | 0.1621 | 0.1603 |
| 0.0632 | 10.0 | 25180 | 1.5825 | 0.1489 | 0.1547 | 0.1518 |
| 0.0548 | 11.0 | 27698 | 1.5946 | 0.1421 | 0.1461 | 0.1441 |
| 0.0473 | 12.0 | 30216 | 1.6515 | 0.1526 | 0.1548 | 0.1537 |
| 0.0415 | 13.0 | 32734 | 1.6544 | 0.1506 | 0.1478 | 0.1492 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BruceLin/whisper-small-Chinese-HK | BruceLin | 2023-02-07T06:48:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-05T05:31:37Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
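A minimal sketch, assuming this repo holds a standard fine-tuned Whisper checkpoint (the audio file name is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="BruceLin/whisper-small-Chinese-HK")
print(asr("cantonese_sample.wav")["text"])  # hypothetical audio file
```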
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
pfunk/Pong-v4-DQPN_p10_pt0.1-seed1 | pfunk | 2023-02-07T06:47:49Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Pong-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-02-07T06:47:29Z | ---
tags:
- Pong-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-v4
type: Pong-v4
metrics:
- type: mean_reward
value: 5.90 +/- 4.99
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p10_pt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p10_pt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p10_pt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_pt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_pt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p10_pt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p10_pt0.1 --start-policy-f 10000 --end-policy-f 10000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 10000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p10_pt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 10000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
LowGI/STT_Model_9 | LowGI | 2023-02-07T06:43:58Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-02-07T03:02:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: STT_Model_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STT_Model_9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2506
- Wer: 0.1718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Dataset info
- Name: LJSpeech
- Source: https://www.kaggle.com/datasets/mathurinache/the-lj-speech-dataset
- Total audios (in Google Drive): 1420
- Total transcripts (in Google Drive): 13100
- No. of rows selected: 500
- Train-test ratio: 70:30
- No. of training set: 350
- No. of testing set: 150
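A minimal transcription sketch (the checkpoint is a CTC wav2vec2 model, so the standard ASR pipeline applies; the audio file name is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="LowGI/STT_Model_9")
print(asr("ljspeech_sample.wav")["text"])  # hypothetical 16 kHz audio clip
```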
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 4.55 | 200 | 2.9217 | 0.9846 |
| No log | 9.09 | 400 | 1.2293 | 0.7093 |
| 2.3111 | 13.64 | 600 | 0.3885 | 0.3602 |
| 2.3111 | 18.18 | 800 | 0.3123 | 0.3097 |
| 0.2471 | 22.73 | 1000 | 0.3094 | 0.2737 |
| 0.2471 | 27.27 | 1200 | 0.3007 | 0.2537 |
| 0.2471 | 31.82 | 1400 | 0.2650 | 0.2008 |
| 0.0853 | 36.36 | 1600 | 0.2599 | 0.1884 |
| 0.0853 | 40.91 | 1800 | 0.2462 | 0.1734 |
| 0.0344 | 45.45 | 2000 | 0.2663 | 0.1730 |
| 0.0344 | 50.0 | 2200 | 0.2506 | 0.1718 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
CoreyMorris/poca-SoccerTwos-football-is-life | CoreyMorris | 2023-02-07T05:35:01Z | 34 | 1 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-02-07T05:34:53Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: CoreyMorris/poca-SoccerTwos-football-is-life
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
sayakpaul/vit-base-patch16-224-in21k-finetuned-lora-food101 | sayakpaul | 2023-02-07T05:27:14Z | 49 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-07T02:43:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-lora-food101
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.96
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-lora-food101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1448
- Accuracy: 0.96
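Since this checkpoint stores LoRA adapter weights, inference presumably goes through 🤗 PEFT on top of the base ViT; a sketch (the 101-class label count matches food101 and the input image is hypothetical):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTForImageClassification
from peft import PeftModel

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
base = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=101  # food101 has 101 classes (assumed)
)
model = PeftModel.from_pretrained(base, "sayakpaul/vit-base-patch16-224-in21k-finetuned-lora-food101")

image = Image.open("dish.jpg")  # hypothetical input image
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(-1)
```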
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 9 | 0.5069 | 0.896 |
| 2.1627 | 2.0 | 18 | 0.1891 | 0.946 |
| 0.3451 | 3.0 | 27 | 0.1448 | 0.96 |
| 0.2116 | 4.0 | 36 | 0.1509 | 0.958 |
| 0.1711 | 5.0 | 45 | 0.1498 | 0.958 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ernie-ai/document-language-class-ar-en-zh | ernie-ai | 2023-02-07T05:19:39Z | 22 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-02-07T05:19:28Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: document-language-class-ar-en-zh
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8111110925674438
---
# document-language-class-ar-en-zh
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
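A minimal usage sketch (the input file name is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ernie-ai/document-language-class-ar-en-zh")
print(classifier("scanned_page.png"))  # hypothetical document image
```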
## Example Images
#### abstract art lines

#### arabic document

#### chinese document

#### english document
 |