modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags sequence | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
---|---|---|---|---|---|---|---|---|---|
aszfcxcgszdx/samsum | aszfcxcgszdx | 2023-03-15T14:30:17Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:aszfcxcgszdx/autotrain-data-samsum-auto",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-03-15T14:25:51Z | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aszfcxcgszdx/autotrain-data-samsum-auto
co2_eq_emissions:
emissions: 0.0077793677303344775
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 41244106342
- CO2 Emissions (in grams): 0.0078
## Validation Metrics
- Loss: 1.565
- Rouge1: 47.592
- Rouge2: 23.270
- RougeL: 39.623
- RougeLsum: 43.180
- Gen Len: 18.305
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/aszfcxcgszdx/autotrain-samsum-auto-41244106342
``` |
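For the summarization model above, the same Inference API call can also be made from Python. This is a minimal sketch that reuses the endpoint and token placeholder from the cURL example verbatim (replace YOUR_HUGGINGFACE_API_KEY with a real token):

```python
import requests

# Endpoint and auth header copied from the cURL example above
API_URL = "https://api-inference.huggingface.co/aszfcxcgszdx/autotrain-samsum-auto-41244106342"
HEADERS = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

def summarize(text: str):
    # POST the dialogue to the hosted endpoint and return the parsed JSON response
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    response.raise_for_status()
    return response.json()

print(summarize("I love AutoTrain"))
```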
aszfcxcgszdx/multilingual-samsum | aszfcxcgszdx | 2023-03-15T14:29:30Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:aszfcxcgszdx/autotrain-data-multi-lingual-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-03-15T13:54:42Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aszfcxcgszdx/autotrain-data-multi-lingual-summarization
co2_eq_emissions:
emissions: 13.328572874208332
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 41234106312
- CO2 Emissions (in grams): 13.3286
## Validation Metrics
- Loss: 1.508
- Rouge1: 44.068
- Rouge2: 20.883
- RougeL: 37.071
- RougeLsum: 40.613
- Gen Len: 17.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/aszfcxcgszdx/autotrain-multi-lingual-summarization-41234106312
``` |
aszfcxcgszdx/mt5-large-samsum | aszfcxcgszdx | 2023-03-15T14:27:58Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:aszfcxcgszdx/autotrain-data-multi-lingual-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-03-15T13:54:46Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aszfcxcgszdx/autotrain-data-multi-lingual-summarization
co2_eq_emissions:
emissions: 12.703463244389663
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 41234106313
- CO2 Emissions (in grams): 12.7035
## Validation Metrics
- Loss: 1.508
- Rouge1: 44.142
- Rouge2: 21.000
- RougeL: 37.127
- RougeLsum: 40.611
- Gen Len: 17.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/aszfcxcgszdx/autotrain-multi-lingual-summarization-41234106313
``` |
quilaquedi/ppo-LunarLander-v2 | quilaquedi | 2023-03-15T14:17:11Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T08:48:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.86 +/- 21.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
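The usage section above is still the auto-generated placeholder. Below is a minimal sketch of how such a checkpoint is typically loaded and evaluated with `huggingface_sb3` and Stable-Baselines3; the checkpoint filename is an assumption and should be checked against the repository's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename is assumed, not stated in this card)
checkpoint = load_from_hub(
    repo_id="quilaquedi/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy for a few episodes (requires gym/gymnasium with Box2D installed)
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```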
|
MarkieMark1/a2c-AntBulletEnv-v0 | MarkieMark1 | 2023-03-15T14:14:04Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T14:12:57Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1450.36 +/- 87.93
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
nouman-10/fine-tune-bert-combined-mlm | nouman-10 | 2023-03-15T14:03:55Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-15T12:49:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: unsupervised-fine-tune-bert-cased-combined
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unsupervised-fine-tune-bert-cased-combined
This model is a fine-tuned version of [nouman-10/unsupervised-comb-cased](https://huggingface.co/nouman-10/unsupervised-comb-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4579
- Accuracy: 0.7384
- F1: 0.7384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4463 | 1.0 | 1819 | 0.7093 | 0.6483 | 0.6483 |
| 0.3304 | 2.0 | 3638 | 0.5988 | 0.7471 | 0.7471 |
| 0.211 | 3.0 | 5457 | 0.8888 | 0.75 | 0.75 |
| 0.1237 | 4.0 | 7276 | 1.4573 | 0.7355 | 0.7355 |
| 0.0959 | 5.0 | 9095 | 1.7000 | 0.7355 | 0.7355 |
| 0.062 | 6.0 | 10914 | 2.0796 | 0.7064 | 0.7064 |
| 0.0347 | 7.0 | 12733 | 1.7562 | 0.7558 | 0.7558 |
| 0.0259 | 8.0 | 14552 | 2.3160 | 0.7267 | 0.7267 |
| 0.0166 | 9.0 | 16371 | 2.3301 | 0.7471 | 0.7471 |
| 0.0091 | 10.0 | 18190 | 2.4579 | 0.7384 | 0.7384 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Prgrg/ja-en-JESC-v3.0 | Prgrg | 2023-03-15T13:59:40Z | 69 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:51:43Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Prgrg/ja-en-JESC-v3.0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Prgrg/ja-en-JESC-v3.0
This model is a fine-tuned version of [Prgrg/ja-en-JESC-v2.0](https://huggingface.co/Prgrg/ja-en-JESC-v2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.8267
- Validation Loss: 7.8094
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 150000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3432 | 6.9622 | 0 |
| 5.2217 | 7.5277 | 1 |
| 5.1853 | 7.5818 | 2 |
| 4.9986 | 7.5179 | 3 |
| 4.8957 | 7.7693 | 4 |
| 4.8267 | 7.8094 | 5 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
psheaton/RoBERTa_for_eyewitness_confidence | psheaton | 2023-03-15T13:58:23Z | 110 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"legal",
"en",
"license:afl-3.0",
"autotrain_compatible",
"region:us"
] | text-classification | 2023-03-15T13:40:16Z | ---
inference: false
license: afl-3.0
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- legal
--- |
lora-library/girlwyt | lora-library | 2023-03-15T13:57:36Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-03-15T13:57:32Z | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: wyt
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - girlwyt
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "wyt" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: wyt




|
vickylin21/Twitter_sentiment_analysis | vickylin21 | 2023-03-15T13:56:35Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-12T04:26:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Twitter_sentiment_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Twitter_sentiment_analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1891
- Accuracy: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7259 | 1.0 | 800 | 0.2336 | 0.92 |
| 0.1542 | 2.0 | 1600 | 0.1891 | 0.9275 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
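A minimal sketch for trying the classifier through the standard `pipeline` API (the example tweet is illustrative only, not taken from the training data):

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT sentiment classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="vickylin21/Twitter_sentiment_analysis",
)

print(classifier("Just got my package two days early, great service!"))
```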
|
aienthused/Pixelcopter-PLE-v0 | aienthused | 2023-03-15T13:56:04Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T11:20:16Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.00 +/- 40.93
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YashGajjar/Taxi-v3_Q-agent | YashGajjar | 2023-03-15T13:53:56Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T13:53:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_Q-agent
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="YashGajjar/Taxi-v3_Q-agent", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
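As a rough sketch of what can be done with the loaded Q-table — assuming the pickled dict follows the Deep RL course convention of storing the greedy table under a `qtable` key, which this card does not state explicitly — one can act greedily for a single episode:

```python
import gym
import numpy as np

# `model` is the dict returned by load_from_hub above; the "qtable" key is assumed.
# Note: load_from_hub here is the helper defined in the course notebook, not a library import.
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

state = env.reset()  # classic gym API; gymnasium's reset() returns (state, info) instead
done, total_reward = False, 0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```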
|
EExe/Reinforce-cartpole | EExe | 2023-03-15T13:52:32Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T13:52:19Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
peterdamn/a2c-AntBulletEnv-v0 | peterdamn | 2023-03-15T13:52:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T13:51:13Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1556.86 +/- 35.82
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Christian90/pixelcoper-v1 | Christian90 | 2023-03-15T13:52:14Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T13:50:04Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcoper-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -4.80 +/- 0.60
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
yyq90/dqn-SpaceInvadersNoFrameskip-v4 | yyq90 | 2023-03-15T13:46:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T13:45:26Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 873.50 +/- 316.31
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yyq90 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yyq90 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yyq90
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
reyhanemyr/distilbert-base-cased-finetuned-paper3 | reyhanemyr | 2023-03-15T13:40:41Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-15T13:26:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-cased-finetuned-paper3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-paper3
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1966
- Precision: 0.6773
- Recall: 0.7350
- F1: 0.7050
- Accuracy: 0.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 73 | 0.1851 | 0.4383 | 0.4479 | 0.4431 | 0.9434 |
| No log | 2.0 | 146 | 0.1473 | 0.5455 | 0.5678 | 0.5564 | 0.9593 |
| No log | 3.0 | 219 | 0.1391 | 0.6509 | 0.6530 | 0.6520 | 0.9646 |
| No log | 4.0 | 292 | 0.1236 | 0.6552 | 0.7192 | 0.6857 | 0.9702 |
| No log | 5.0 | 365 | 0.1352 | 0.6724 | 0.7382 | 0.7038 | 0.9693 |
| No log | 6.0 | 438 | 0.1594 | 0.6746 | 0.7129 | 0.6933 | 0.9673 |
| 0.0969 | 7.0 | 511 | 0.1693 | 0.6705 | 0.7382 | 0.7027 | 0.9683 |
| 0.0969 | 8.0 | 584 | 0.1806 | 0.6923 | 0.7382 | 0.7145 | 0.9692 |
| 0.0969 | 9.0 | 657 | 0.1594 | 0.6359 | 0.7603 | 0.6925 | 0.9687 |
| 0.0969 | 10.0 | 730 | 0.1740 | 0.6946 | 0.7319 | 0.7127 | 0.9683 |
| 0.0969 | 11.0 | 803 | 0.1881 | 0.6735 | 0.7287 | 0.7 | 0.9677 |
| 0.0969 | 12.0 | 876 | 0.1932 | 0.7064 | 0.7287 | 0.7174 | 0.9692 |
| 0.0969 | 13.0 | 949 | 0.1890 | 0.6907 | 0.7256 | 0.7077 | 0.9689 |
| 0.0025 | 14.0 | 1022 | 0.1860 | 0.6705 | 0.7445 | 0.7055 | 0.9696 |
| 0.0025 | 15.0 | 1095 | 0.1951 | 0.6706 | 0.7256 | 0.6970 | 0.9688 |
| 0.0025 | 16.0 | 1168 | 0.1936 | 0.6648 | 0.7319 | 0.6967 | 0.9681 |
| 0.0025 | 17.0 | 1241 | 0.1969 | 0.6725 | 0.7319 | 0.7009 | 0.9686 |
| 0.0025 | 18.0 | 1314 | 0.1953 | 0.6792 | 0.7413 | 0.7089 | 0.9692 |
| 0.0025 | 19.0 | 1387 | 0.1960 | 0.6754 | 0.7350 | 0.7039 | 0.9687 |
| 0.0025 | 20.0 | 1460 | 0.1966 | 0.6773 | 0.7350 | 0.7050 | 0.9687 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
avuhong/ParvoGPT2 | avuhong | 2023-03-15T13:36:45Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-15T13:27:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [nferruz/ProtGPT2](https://huggingface.co/nferruz/ProtGPT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6699
- Accuracy: 0.7571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 220 | 3.8564 | 0.4857 |
| No log | 2.0 | 440 | 2.7515 | 0.6096 |
| 4.1568 | 3.0 | 660 | 2.2463 | 0.6780 |
| 4.1568 | 4.0 | 880 | 1.9817 | 0.7152 |
| 2.2818 | 5.0 | 1100 | 1.8278 | 0.7353 |
| 2.2818 | 6.0 | 1320 | 1.7313 | 0.7486 |
| 1.8444 | 7.0 | 1540 | 1.6847 | 0.7553 |
| 1.8444 | 8.0 | 1760 | 1.6699 | 0.7571 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
DipeshY/roberta-finetuned-disaster_type_dy | DipeshY | 2023-03-15T13:26:51Z | 131 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-03-15T12:54:25Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-disaster_type_dy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-disaster_type_dy
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
|
TiborUdvari/distilgpt2-test-douglas-finetuned-hitchhiker | TiborUdvari | 2023-03-15T13:15:04Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-15T13:05:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-test-douglas-finetuned-hitchhiker
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-test-douglas-finetuned-hitchhiker
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 83 | 4.3445 |
| No log | 2.0 | 166 | 4.1845 |
| No log | 3.0 | 249 | 4.1353 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
stelladk/A2C-AntBulletEnv-v0 | stelladk | 2023-03-15T13:01:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T11:36:46Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1572.51 +/- 52.53
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ThoDum/a2c-PandaReachDense-v2 | ThoDum | 2023-03-15T13:00:02Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T11:44:12Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.71 +/- 0.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hmatzner/ppo-PyramidsRND | hmatzner | 2023-03-15T12:58:08Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-03-15T12:58:03Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: hmatzner/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Christian90/CartPole-v1 | Christian90 | 2023-03-15T12:57:23Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T12:57:18Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 370.30 +/- 191.91
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nouman-10/unsupervised-comb-cased | nouman-10 | 2023-03-15T12:49:10Z | 87 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-03-15T12:25:18Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: unsupervised-comb-cased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# unsupervised-comb-cased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9912
- Validation Loss: 3.1077
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -711, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4995 | 3.2941 | 0 |
| 3.1428 | 3.1982 | 1 |
| 2.9912 | 3.1077 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Yoshiii/opt-6.7b-lora | Yoshiii | 2023-03-15T12:32:36Z | 0 | 2 | null | [
"license:unlicense",
"region:us"
] | null | 2023-03-15T11:09:03Z | ---
license: unlicense
---
Running opt-6.7b with added LoRAs locally on Windows!
# bitsandbytes
I needed to get bitsandbytes working in my venv:
I replaced main.py at C:\Users\user\Desktop\test\peft\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py with the one here!
I also added a .dll file here: C:\Users\user\Desktop\test\peft\venv\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll
# Training Script
(https://github.com/huggingface/peft/commit/df0e1fb59266c9903ddd6dbfe7339bcd2068d150) (It's from their notebook!)
```
#load
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    load_in_8bit=True,
    device_map='auto',
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
#post-processing
for param in model.parameters():
    param.requires_grad = False  # freeze the model - train adapters later
    if param.ndim == 1:
        # cast the small parameters (e.g. layernorm) to fp32 for stability
        param.data = param.data.to(torch.float32)

model.gradient_checkpointing_enable()  # reduce number of stored activations
model.enable_input_require_grads()

class CastOutputToFloat(nn.Sequential):
    def forward(self, x): return super().forward(x).to(torch.float32)
model.lm_head = CastOutputToFloat(model.lm_head)
# apply lora
def print_trainable_parameters(model):
"""
Prints the number of trainable parameters in the model.
"""
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
# apply lora 2
from peft import LoraConfig, get_peft_model
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
print_trainable_parameters(model)
# training
import transformers
from datasets import load_dataset
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)
trainer = transformers.Trainer(
    model=model,
    train_dataset=data['train'],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        warmup_steps=100,
        max_steps=200,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir='outputs'
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
# push to huggingface txtloras
model.push_to_hub("Yoshiii/opt-6.7b-lora", use_auth_token=True)
# inference
batch = tokenizer("Two things are infinite: ", return_tensors='pt')
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)
print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
# Inference (loading this repo lora from hf)
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "Yoshiii/opt-6.7b-lora"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
batch = tokenizer("Two things are infinite: ", return_tensors='pt')
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)
print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
Two things are infinite: the universe and human stupidity; and I'm not sure about the universe. -Albert Einstein I'm not sure about the universe either.
This output resembles the training data. If you run inference without applying the LoRA, the output will usually look worse. If you retrain the LoRA, the new adapter will not produce the same results, even with the same settings.
Inference should usually be deterministic when using the same LoRA, or when running without one.
Also, if you want to download the LoRA and use it from a local folder, here's the inference script:
```
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "./loramodel"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
batch = tokenizer("Two things are infinite: ", return_tensors='pt')
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=50)
print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
Add adapter_config.json and adapter_model.bin to a folder in your current directory named `loramodel`, or whatever name you choose.
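If you trained the adapter yourself, those two files can be produced with PEFT's `save_pretrained` — a small sketch, where `model` is the PEFT-wrapped model from the training script above:

```python
# Save only the LoRA adapter weights (not the full opt-6.7b base model).
# This writes adapter_config.json and adapter_model.bin into ./loramodel
model.save_pretrained("./loramodel")
```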
|
qfrodicio/ml-roberta-large-finetuned-gesture-prediction-21-classes | qfrodicio | 2023-03-15T12:29:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-15T11:19:48Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ml-roberta-large-finetuned-gesture-prediction-21-classes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ml-roberta-large-finetuned-gesture-prediction-21-classes
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the validation set:
- Loss: 0.7506
- Accuracy: 0.7927
- Precision: 0.7829
- Recall: 0.7927
- F1: 0.7837
It achieves the following results on the test set:
- Loss: 0.8029
- Accuracy: 0.7720
- Precision: 0.7764
- Recall: 0.7720
- F1: 0.7636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
This model has been trained on the qfrodicio/gesture-prediction-21-classes dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- weight_decay: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.4441 | 1.0 | 104 | 1.5033 | 0.5798 | 0.5071 | 0.5798 | 0.5015 |
| 1.2049 | 2.0 | 208 | 0.8885 | 0.7532 | 0.7434 | 0.7532 | 0.7331 |
| 0.7329 | 3.0 | 312 | 0.7506 | 0.7927 | 0.7829 | 0.7927 | 0.7837 |
| 0.4949 | 4.0 | 416 | 0.7801 | 0.7936 | 0.7946 | 0.7936 | 0.7866 |
| 0.3221 | 5.0 | 520 | 0.8761 | 0.7957 | 0.7889 | 0.7957 | 0.7865 |
| 0.2112 | 6.0 | 624 | 0.9118 | 0.8062 | 0.8085 | 0.8062 | 0.8004 |
| 0.1458 | 7.0 | 728 | 0.9391 | 0.8071 | 0.8057 | 0.8071 | 0.8019 |
| 0.0988 | 8.0 | 832 | 0.9592 | 0.8105 | 0.8073 | 0.8105 | 0.8065 |
| 0.0685 | 9.0 | 936 | 1.0358 | 0.8057 | 0.8043 | 0.8057 | 0.8016 |
| 0.052 | 10.0 | 1040 | 1.0511 | 0.8089 | 0.8080 | 0.8089 | 0.8037 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
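A quick way to try the gesture tagger is the standard token-classification `pipeline` (a minimal sketch; the example sentence is illustrative only):

```python
from transformers import pipeline

# Load the fine-tuned XLM-RoBERTa gesture-prediction tagger from the Hub
tagger = pipeline(
    "token-classification",
    model="qfrodicio/ml-roberta-large-finetuned-gesture-prediction-21-classes",
)

print(tagger("Hello everyone, thank you very much for coming today"))
```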
|
pfunk/PongNoFrameskip-v4-DQN_baseline-seed4 | pfunk | 2023-03-15T12:22:13Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T12:22:04Z | ---
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.16 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **PongNoFrameskip-v4**
This is a trained model of a DQN agent playing PongNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_baseline]"
python -m cleanrl_utils.enjoy --exp-name DQN_baseline --env-id PongNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed4/raw/main/dqn_atari.py
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed4/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari.py --exp-name DQN_baseline --seed 4 --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk
```
# Hyperparameters
```python
{'alg_type': 'dqn_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'end_e': 0.01,
'env_id': 'PongNoFrameskip-v4',
'exp_name': 'DQN_baseline',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'save_model': True,
'seed': 4,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 5000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
joheras/xlm-roberta-base-finetuned-clinais | joheras | 2023-03-15T12:21:34Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-03-15T11:43:46Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-clinais
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-clinais
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 223 | 1.7818 |
| 1.9591 | 2.0 | 446 | 1.6896 |
| 1.9591 | 3.0 | 669 | 1.6195 |
| 1.7055 | 4.0 | 892 | 1.5804 |
| 1.7055 | 5.0 | 1115 | 1.6104 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
|
pfunk/PongNoFrameskip-v4-DQN_baseline-seed2 | pfunk | 2023-03-15T12:20:11Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T12:20:01Z | ---
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.41 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **PongNoFrameskip-v4**
This is a trained model of a DQN agent playing PongNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_baseline]"
python -m cleanrl_utils.enjoy --exp-name DQN_baseline --env-id PongNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed2/raw/main/dqn_atari.py
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed2/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari.py --exp-name DQN_baseline --seed 2 --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk
```
# Hyperparameters
```python
{'alg_type': 'dqn_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'end_e': 0.01,
'env_id': 'PongNoFrameskip-v4',
'exp_name': 'DQN_baseline',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'save_model': True,
'seed': 2,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 5000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/PongNoFrameskip-v4-DQN_baseline-seed3 | pfunk | 2023-03-15T12:11:56Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T12:11:48Z | ---
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.33 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **PongNoFrameskip-v4**
This is a trained model of a DQN agent playing PongNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_baseline]"
python -m cleanrl_utils.enjoy --exp-name DQN_baseline --env-id PongNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed3/raw/main/dqn_atari.py
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-DQN_baseline-seed3/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari.py --exp-name DQN_baseline --seed 3 --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk
```
# Hyperparameters
```python
{'alg_type': 'dqn_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'end_e': 0.01,
'env_id': 'PongNoFrameskip-v4',
'exp_name': 'DQN_baseline',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'save_model': True,
'seed': 3,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 5000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
shark123/text-to-sparql-LCQUAD | shark123 | 2023-03-15T12:10:19Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:40:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: text-to-sparql-LCQUAD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-LCQUAD
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Gen Len: 19.0
- Bertscorer-p: 0.2282
- Bertscorer-r: -0.5504
- Bertscorer-f1: -0.1909
- Sacrebleu-score: 0.0000
- Sacrebleu-precisions: [100.0, 100.0, 100.0, 100.0]
- Bleu-bp: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------:|:-------:|
| 0.0019 | 1.0 | 2491 | 0.0000 | 19.0 | 0.2282 | -0.5504 | -0.1909 | 0.0000 | [100.0, 100.0, 100.0, 100.0] | 0.0000 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
vocabtrimmer/mt5-small-esquad-qg-trimmed-es-120000 | vocabtrimmer | 2023-03-15T11:53:50Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:36:51Z | # Vocabulary Trimmed [lmqg/mt5-small-esquad-qg](https://huggingface.co/lmqg/mt5-small-esquad-qg): `vocabtrimmer/mt5-small-esquad-qg-trimmed-es-120000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qg](https://huggingface.co/lmqg/mt5-small-esquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-esquad-qg | vocabtrimmer/mt5-small-esquad-qg-trimmed-es-120000 |
|:---------------------------|:---------------------------|:-----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 120000 | 2 | |
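The trimmed checkpoint above loads like any other mT5 model on the Hub. A minimal sketch follows; the input format (answer span highlighted with `<hl>` tokens) is assumed from the original lmqg question-generation model and is not stated in this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vocabtrimmer/mt5-small-esquad-qg-trimmed-es-120000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Input format assumed from lmqg/mt5-small-esquad-qg: highlight the answer span with <hl> tokens
text = "<hl> Lionel Messi <hl> nació en Rosario, Argentina."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```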
KarosY/lianjia_3l_881per100_1e-3 | KarosY | 2023-03-15T11:46:44Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-03-15T04:04:49Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/KarosY/lianjia_3l_881per100_1e-3
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were fine-tuned on the None dataset. You can find some example images below.




|
peterdamn/ppo-Pyramids | peterdamn | 2023-03-15T11:44:00Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-03-15T11:43:55Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: peterdamn/ppo-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-60000 | vocabtrimmer | 2023-03-15T11:34:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:20:20Z | # Vocabulary Trimmed [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg): `vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-60000`
This model is a trimmed version of [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-ruquad-qg | vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-60000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
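As a conceptual illustration of what trimming does (this is not the `vocabtrimmer` implementation), one can collect the token ids that actually occur in a target-language corpus and keep only those rows of the shared embedding matrix; remapping the tokenizer itself is omitted here:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmqg/mt5-small-ruquad-qg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tiny stand-in corpus; in practice this would be e.g. the mC4 Russian validation split.
corpus = ["Пример русского предложения.", "Ещё одно предложение для статистики."]
kept_ids = sorted({tid for text in corpus for tid in tokenizer(text)["input_ids"]})

old_emb = model.get_input_embeddings().weight.data            # shape: (vocab_size, d_model)
trimmed = torch.nn.Embedding(len(kept_ids), old_emb.shape[1])
trimmed.weight.data = old_emb[kept_ids].clone()               # keep only the selected rows
print(f"kept {len(kept_ids)} of {old_emb.shape[0]} embedding rows")
```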
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 60000 | 2 | |
vocabtrimmer/mt5-small-koquad-qg-trimmed-ko-60000 | vocabtrimmer | 2023-03-15T11:33:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:19:04Z | # Vocabulary Trimmed [lmqg/mt5-small-koquad-qg](https://huggingface.co/lmqg/mt5-small-koquad-qg): `vocabtrimmer/mt5-small-koquad-qg-trimmed-ko-60000`
This model is a trimmed version of [lmqg/mt5-small-koquad-qg](https://huggingface.co/lmqg/mt5-small-koquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-koquad-qg | vocabtrimmer/mt5-small-koquad-qg-trimmed-ko-60000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 60000 | 2 | |
vocabtrimmer/mt5-small-itquad-qg-trimmed-it-60000 | vocabtrimmer | 2023-03-15T11:29:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:16:20Z | # Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed-it-60000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed-it-60000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 105,504,128 |
| parameter_size_embedding | 256,103,424 | 61,442,048 |
| vocab_size | 250,101 | 60,002 |
| compression_rate_full | 100.0 | 35.15 |
| compression_rate_embedding | 100.0 | 23.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 60000 | 2 | |
uygarkurt/bert-restore-punctuation-turkish-legacy | uygarkurt | 2023-03-15T11:23:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"punctuation",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-10T20:46:33Z | ---
license: mit
language:
- tr
tags:
- punctuation
widget:
text: "Türkiye toprakları üzerindeki ilk yerleşmeler Yontma Taş Devri'nde başlar Doğu Trakya'da Traklar olmak üzere Hititler Frigler Lidyalılar ve Dor istilası sonucu Yunanistan'dan kaçan Akalar tarafından kurulan İyon medeniyeti gibi çeşitli eski Anadolu medeniyetlerinin ardından Makedonya kralı Büyük İskender'in egemenliğiyle ve fetihleriyle birlikte Helenistik Dönem başladı"
---
## bert-restore-punctuation-turkish
This bert-base-cased model was fine-tuned for the punctuation restoration task on a mixture of open-source datasets. The model is able to predict **[! ? , . ; :]**
This model works on arbitrarily large Turkish text.
-----------------------------------------------
## Usage
```python
from transformers import pipeline
classifier = pipeline("token-classification", model="uygarkurt/bert-restore-punctuation-turkish", tokenizer="uygarkurt/bert-restore-punctuation-turkish")
txt = "Türkiye toprakları üzerindeki ilk yerleşmeler Yontma Taş Devri'nde başlar Doğu Trakya'da Traklar olmak üzere Hititler Frigler Lidyalılar ve Dor istilası sonucu Yunanistan'dan kaçan Akalar tarafından kurulan İyon medeniyeti gibi çeşitli eski Anadolu medeniyetlerinin ardından Makedonya kralı Büyük İskender'in egemenliğiyle ve fetihleriyle birlikte Helenistik Dönem başladı"
print(classifier(txt))
# Output
# Türkiye toprakları üzerindeki ilk yerleşmeler Yontma Taş Devri'nde başlar. Doğu Trakya'da Traklar olmak üzere Hititler, Frigler, Lidyalılar ve Dor istilası sonucu Yunanistan'dan kaçan Akalar tarafından kurulan İyon medeniyeti gibi çeşitli eski Anadolu medeniyetlerinin ardından Makedonya kralı Büyük İskender'in egemenliğiyle ve fetihleriyle birlikte Helenistik Dönem başladı.
```
-----------------------------------------------
## Evaluation
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-----:|:-------------:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 1 | 0.100300 | 0.096145 | 0.862800 | 0.829047 | 0.845586 | 0.965330 |
| 2 | 0.084200 | 0.092011 | 0.878079 | 0.830346 | 0.853546 | 0.967148 |
| 3 | 0.075200 | 0.093337 | 0.878449 | 0.833345 | 0.855303 | 0.967539 |
-----------------------------------------------
## Information
This is a pilot project. Depending on demand, it can be improved.
Contact [email protected] |
joheras/clinico-roberta-biomedical-finetuned | joheras | 2023-03-15T11:15:45Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-15T10:30:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: clinico-roberta-biomedical-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinico-roberta-biomedical-finetuned
This model is a fine-tuned version of [joheras/roberta-base-biomedical-clinical-es-finetuned-clinais](https://huggingface.co/joheras/roberta-base-biomedical-clinical-es-finetuned-clinais) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9272
- Precision: 0.5095
- Recall: 0.6463
- F1: 0.5698
- Accuracy: 0.8623
## Model description
More information needed
## Intended uses & limitations
More information needed
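A minimal way to try the checkpoint, assuming the tokenizer is bundled with the model; the Spanish clinical sentence is a made-up example and the label set is whatever the model was trained with:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="joheras/clinico-roberta-biomedical-finetuned",
    aggregation_strategy="simple",
)
print(ner("Paciente de 67 años con dolor torácico y antecedentes de hipertensión arterial."))
```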
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 1.2199 | 0.0033 | 0.0053 | 0.0040 | 0.5756 |
| No log | 2.0 | 50 | 0.7306 | 0.2031 | 0.2642 | 0.2296 | 0.8021 |
| No log | 3.0 | 75 | 0.6366 | 0.2967 | 0.3811 | 0.3336 | 0.8235 |
| No log | 4.0 | 100 | 0.6135 | 0.3497 | 0.4653 | 0.3993 | 0.8304 |
| No log | 5.0 | 125 | 0.5845 | 0.3421 | 0.4537 | 0.3900 | 0.8331 |
| No log | 6.0 | 150 | 0.5697 | 0.3307 | 0.4421 | 0.3784 | 0.8390 |
| No log | 7.0 | 175 | 0.5415 | 0.3211 | 0.4495 | 0.3746 | 0.8471 |
| No log | 8.0 | 200 | 0.5430 | 0.3589 | 0.5179 | 0.4240 | 0.8567 |
| No log | 9.0 | 225 | 0.5513 | 0.3342 | 0.5474 | 0.4150 | 0.8604 |
| No log | 10.0 | 250 | 0.5681 | 0.3769 | 0.5768 | 0.4559 | 0.8582 |
| No log | 11.0 | 275 | 0.5813 | 0.3756 | 0.5863 | 0.4579 | 0.8553 |
| No log | 12.0 | 300 | 0.6096 | 0.4181 | 0.5968 | 0.4918 | 0.8574 |
| No log | 13.0 | 325 | 0.6318 | 0.3978 | 0.6042 | 0.4797 | 0.8539 |
| No log | 14.0 | 350 | 0.6309 | 0.3892 | 0.5968 | 0.4711 | 0.8553 |
| No log | 15.0 | 375 | 0.6559 | 0.3987 | 0.5968 | 0.4781 | 0.8565 |
| No log | 16.0 | 400 | 0.6391 | 0.4275 | 0.6021 | 0.5 | 0.8560 |
| No log | 17.0 | 425 | 0.6812 | 0.4388 | 0.6074 | 0.5095 | 0.8584 |
| No log | 18.0 | 450 | 0.6901 | 0.4287 | 0.6137 | 0.5048 | 0.8563 |
| No log | 19.0 | 475 | 0.6834 | 0.4572 | 0.6074 | 0.5217 | 0.8581 |
| 0.3478 | 20.0 | 500 | 0.7050 | 0.4397 | 0.6179 | 0.5138 | 0.8573 |
| 0.3478 | 21.0 | 525 | 0.7004 | 0.4462 | 0.6242 | 0.5204 | 0.8591 |
| 0.3478 | 22.0 | 550 | 0.7038 | 0.4264 | 0.6126 | 0.5028 | 0.8599 |
| 0.3478 | 23.0 | 575 | 0.7384 | 0.4416 | 0.6284 | 0.5187 | 0.8576 |
| 0.3478 | 24.0 | 600 | 0.7197 | 0.4479 | 0.62 | 0.5201 | 0.8619 |
| 0.3478 | 25.0 | 625 | 0.7412 | 0.4381 | 0.6221 | 0.5141 | 0.8559 |
| 0.3478 | 26.0 | 650 | 0.7535 | 0.4489 | 0.6242 | 0.5222 | 0.8566 |
| 0.3478 | 27.0 | 675 | 0.7534 | 0.4657 | 0.6432 | 0.5402 | 0.8586 |
| 0.3478 | 28.0 | 700 | 0.7672 | 0.4525 | 0.6168 | 0.5220 | 0.8567 |
| 0.3478 | 29.0 | 725 | 0.7680 | 0.4637 | 0.6316 | 0.5348 | 0.8599 |
| 0.3478 | 30.0 | 750 | 0.7590 | 0.4611 | 0.6242 | 0.5304 | 0.8607 |
| 0.3478 | 31.0 | 775 | 0.7671 | 0.4732 | 0.6326 | 0.5414 | 0.8625 |
| 0.3478 | 32.0 | 800 | 0.7921 | 0.4674 | 0.6337 | 0.5380 | 0.8590 |
| 0.3478 | 33.0 | 825 | 0.8037 | 0.4828 | 0.6358 | 0.5488 | 0.8574 |
| 0.3478 | 34.0 | 850 | 0.8376 | 0.4644 | 0.6242 | 0.5326 | 0.8534 |
| 0.3478 | 35.0 | 875 | 0.8346 | 0.4815 | 0.6284 | 0.5452 | 0.8552 |
| 0.3478 | 36.0 | 900 | 0.8249 | 0.4750 | 0.6305 | 0.5418 | 0.8567 |
| 0.3478 | 37.0 | 925 | 0.8420 | 0.4580 | 0.6305 | 0.5306 | 0.8548 |
| 0.3478 | 38.0 | 950 | 0.8341 | 0.4773 | 0.6305 | 0.5433 | 0.8550 |
| 0.3478 | 39.0 | 975 | 0.8085 | 0.4792 | 0.6316 | 0.5450 | 0.8653 |
| 0.0274 | 40.0 | 1000 | 0.7954 | 0.4992 | 0.6474 | 0.5637 | 0.8651 |
| 0.0274 | 41.0 | 1025 | 0.8145 | 0.4923 | 0.6421 | 0.5573 | 0.8635 |
| 0.0274 | 42.0 | 1050 | 0.8290 | 0.4911 | 0.6368 | 0.5545 | 0.8610 |
| 0.0274 | 43.0 | 1075 | 0.8468 | 0.4821 | 0.6379 | 0.5492 | 0.8571 |
| 0.0274 | 44.0 | 1100 | 0.8274 | 0.4791 | 0.6389 | 0.5476 | 0.8625 |
| 0.0274 | 45.0 | 1125 | 0.8583 | 0.4831 | 0.6305 | 0.5470 | 0.8551 |
| 0.0274 | 46.0 | 1150 | 0.8420 | 0.4726 | 0.6347 | 0.5418 | 0.8589 |
| 0.0274 | 47.0 | 1175 | 0.8631 | 0.5029 | 0.64 | 0.5632 | 0.8564 |
| 0.0274 | 48.0 | 1200 | 0.8421 | 0.4911 | 0.64 | 0.5558 | 0.8617 |
| 0.0274 | 49.0 | 1225 | 0.8564 | 0.5071 | 0.6411 | 0.5662 | 0.8631 |
| 0.0274 | 50.0 | 1250 | 0.8659 | 0.4845 | 0.6263 | 0.5464 | 0.8603 |
| 0.0274 | 51.0 | 1275 | 0.8596 | 0.4860 | 0.64 | 0.5525 | 0.8632 |
| 0.0274 | 52.0 | 1300 | 0.8713 | 0.4856 | 0.6368 | 0.5510 | 0.8593 |
| 0.0274 | 53.0 | 1325 | 0.8888 | 0.4868 | 0.64 | 0.5530 | 0.8585 |
| 0.0274 | 54.0 | 1350 | 0.8591 | 0.4816 | 0.6337 | 0.5473 | 0.8610 |
| 0.0274 | 55.0 | 1375 | 0.8755 | 0.4996 | 0.64 | 0.5611 | 0.8615 |
| 0.0274 | 56.0 | 1400 | 0.8749 | 0.5095 | 0.6484 | 0.5706 | 0.8583 |
| 0.0274 | 57.0 | 1425 | 0.8867 | 0.5025 | 0.6453 | 0.5650 | 0.8580 |
| 0.0274 | 58.0 | 1450 | 0.8905 | 0.4947 | 0.6337 | 0.5556 | 0.8579 |
| 0.0274 | 59.0 | 1475 | 0.8911 | 0.4881 | 0.6495 | 0.5574 | 0.8596 |
| 0.0099 | 60.0 | 1500 | 0.9220 | 0.4914 | 0.6347 | 0.5540 | 0.8570 |
| 0.0099 | 61.0 | 1525 | 0.8687 | 0.4786 | 0.6368 | 0.5465 | 0.8594 |
| 0.0099 | 62.0 | 1550 | 0.9080 | 0.4906 | 0.6337 | 0.5531 | 0.8575 |
| 0.0099 | 63.0 | 1575 | 0.9004 | 0.4831 | 0.6337 | 0.5483 | 0.8583 |
| 0.0099 | 64.0 | 1600 | 0.8906 | 0.4778 | 0.6337 | 0.5448 | 0.8619 |
| 0.0099 | 65.0 | 1625 | 0.8870 | 0.4959 | 0.6368 | 0.5576 | 0.8618 |
| 0.0099 | 66.0 | 1650 | 0.8843 | 0.4851 | 0.6358 | 0.5503 | 0.8611 |
| 0.0099 | 67.0 | 1675 | 0.8923 | 0.4912 | 0.6453 | 0.5578 | 0.8618 |
| 0.0099 | 68.0 | 1700 | 0.8864 | 0.4898 | 0.6337 | 0.5525 | 0.8615 |
| 0.0099 | 69.0 | 1725 | 0.8974 | 0.4943 | 0.6411 | 0.5582 | 0.8615 |
| 0.0099 | 70.0 | 1750 | 0.8851 | 0.4821 | 0.6379 | 0.5492 | 0.8611 |
| 0.0099 | 71.0 | 1775 | 0.8958 | 0.4920 | 0.6453 | 0.5583 | 0.8593 |
| 0.0099 | 72.0 | 1800 | 0.8880 | 0.4988 | 0.6411 | 0.5610 | 0.8618 |
| 0.0099 | 73.0 | 1825 | 0.8959 | 0.4852 | 0.6379 | 0.5512 | 0.8606 |
| 0.0099 | 74.0 | 1850 | 0.9036 | 0.4773 | 0.6305 | 0.5433 | 0.8598 |
| 0.0099 | 75.0 | 1875 | 0.9031 | 0.4864 | 0.6389 | 0.5523 | 0.8615 |
| 0.0099 | 76.0 | 1900 | 0.9243 | 0.4907 | 0.6368 | 0.5543 | 0.8590 |
| 0.0099 | 77.0 | 1925 | 0.9285 | 0.4877 | 0.6453 | 0.5555 | 0.8590 |
| 0.0099 | 78.0 | 1950 | 0.9261 | 0.5074 | 0.6516 | 0.5705 | 0.8598 |
| 0.0099 | 79.0 | 1975 | 0.9374 | 0.5037 | 0.64 | 0.5637 | 0.8580 |
| 0.0061 | 80.0 | 2000 | 0.9165 | 0.5021 | 0.6316 | 0.5594 | 0.8621 |
| 0.0061 | 81.0 | 2025 | 0.9307 | 0.5162 | 0.6368 | 0.5702 | 0.8582 |
| 0.0061 | 82.0 | 2050 | 0.9369 | 0.4911 | 0.6358 | 0.5541 | 0.8574 |
| 0.0061 | 83.0 | 2075 | 0.9293 | 0.5191 | 0.6421 | 0.5741 | 0.8584 |
| 0.0061 | 84.0 | 2100 | 0.9187 | 0.5004 | 0.6453 | 0.5637 | 0.8629 |
| 0.0061 | 85.0 | 2125 | 0.9293 | 0.4927 | 0.6379 | 0.5560 | 0.8623 |
| 0.0061 | 86.0 | 2150 | 0.9200 | 0.5041 | 0.6453 | 0.5660 | 0.8634 |
| 0.0061 | 87.0 | 2175 | 0.9273 | 0.4992 | 0.6421 | 0.5617 | 0.8631 |
| 0.0061 | 88.0 | 2200 | 0.9325 | 0.5021 | 0.6442 | 0.5643 | 0.8623 |
| 0.0061 | 89.0 | 2225 | 0.9245 | 0.4844 | 0.6389 | 0.5511 | 0.8630 |
| 0.0061 | 90.0 | 2250 | 0.9291 | 0.4979 | 0.6368 | 0.5589 | 0.8593 |
| 0.0061 | 91.0 | 2275 | 0.9264 | 0.5083 | 0.6432 | 0.5678 | 0.8622 |
| 0.0061 | 92.0 | 2300 | 0.9283 | 0.5025 | 0.6411 | 0.5634 | 0.8619 |
| 0.0061 | 93.0 | 2325 | 0.9264 | 0.5008 | 0.6442 | 0.5635 | 0.8613 |
| 0.0061 | 94.0 | 2350 | 0.9205 | 0.5079 | 0.6463 | 0.5688 | 0.8626 |
| 0.0061 | 95.0 | 2375 | 0.9223 | 0.5121 | 0.6484 | 0.5722 | 0.8625 |
| 0.0061 | 96.0 | 2400 | 0.9244 | 0.5045 | 0.6421 | 0.5651 | 0.8620 |
| 0.0061 | 97.0 | 2425 | 0.9248 | 0.5062 | 0.6463 | 0.5677 | 0.8622 |
| 0.0061 | 98.0 | 2450 | 0.9277 | 0.5037 | 0.6453 | 0.5658 | 0.8621 |
| 0.0061 | 99.0 | 2475 | 0.9272 | 0.5083 | 0.6463 | 0.5690 | 0.8623 |
| 0.0046 | 100.0 | 2500 | 0.9272 | 0.5095 | 0.6463 | 0.5698 | 0.8623 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
|
vocabtrimmer/mt5-small-koquad-qg-trimmed-ko-30000 | vocabtrimmer | 2023-03-15T11:15:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T11:01:04Z | # Vocabulary Trimmed [lmqg/mt5-small-koquad-qg](https://huggingface.co/lmqg/mt5-small-koquad-qg): `vocabtrimmer/mt5-small-koquad-qg-trimmed-ko-30000`
This model is a trimmed version of [lmqg/mt5-small-koquad-qg](https://huggingface.co/lmqg/mt5-small-koquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-koquad-qg | vocabtrimmer/mt5-small-koquad-qg-trimmed-ko-30000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 74,784,128 |
| parameter_size_embedding | 256,103,424 | 30,722,048 |
| vocab_size | 250,101 | 30,002 |
| compression_rate_full | 100.0 | 24.91 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | 30000 | 2 | |
vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000 | vocabtrimmer | 2023-03-15T11:12:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:59:37Z | # Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 74,784,128 |
| parameter_size_embedding | 256,103,424 | 30,722,048 |
| vocab_size | 250,101 | 30,002 |
| compression_rate_full | 100.0 | 24.91 |
| compression_rate_embedding | 100.0 | 12.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 30000 | 2 | |
chjun/my_awesome_model | chjun | 2023-03-15T11:07:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-15T06:23:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93052
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2372
- Accuracy: 0.9305
## Model description
More information needed
## Intended uses & limitations
More information needed
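A minimal sketch of running the classifier on a review; the label names (e.g. LABEL_0/LABEL_1) depend on how the classification head was configured and are not documented here:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chjun/my_awesome_model")
print(classifier("A surprisingly touching film with excellent performances."))
```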
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2346 | 1.0 | 1563 | 0.1895 | 0.9280 |
| 0.1531 | 2.0 | 3126 | 0.2372 | 0.9305 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
ealarcong/mt5-small-finetuned-amazon-en-es | ealarcong | 2023-03-15T11:06:04Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-14T10:51:40Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ealarcong/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ealarcong/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1830
- Validation Loss: 3.4080
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
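Since the card was generated from a Keras run, a TensorFlow sketch is the natural fit; the example review is only a placeholder:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "ealarcong/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "I loved this book: the characters were engaging and the plot kept me hooked until the end."
inputs = tokenizer(review, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```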
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.0744 | 4.9526 | 0 |
| 6.2839 | 3.9714 | 1 |
| 5.4063 | 3.6820 | 2 |
| 4.9197 | 3.5710 | 3 |
| 4.5865 | 3.5060 | 4 |
| 4.3904 | 3.4481 | 5 |
| 4.2549 | 3.4180 | 6 |
| 4.1830 | 3.4080 | 7 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
852wa/hako | 852wa | 2023-03-15T11:06:03Z | 0 | 29 | null | [
"region:us"
] | null | 2023-03-15T08:26:35Z | # ■hakoA & hakoB




I conducted custom fine-tuning on wd15-beta2-aesthetic, which is based on the SD2.1 architecture and available at https://huggingface.co/waifu-diffusion/wd-1-5-beta2.
# ■Setting
It is recommended to use "(anime:1.2)" in the prompt and "nsfw,messy,blush,nfixer" in the negative prompt.
"(anime:1.2)" produces a flat, anime-style look.
If the shorter side of the output is below 768 px, the facial features may be rendered incorrectly.
# ■Licence
The hakoA and hakoB models are released under the Fair AI Public License 1.0-SD. Please refer to the following link for the license terms: https://freedevproject.org/faipl-1.0-sd/

```
(anime:1.2),(hyper extreme detailed:1.0),amazing quality,Beautiful Illustration,1girl,breasts,maid_apron,happy smile,cafe with waitresses dressed in cute maid costumes
Negative prompt: nsfw,messy,blush,nfixer,
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1386462091, Size: 768x1152
```

```
(anime:1.2),( stylish pose:1.1), (smile:1), (king (throne:1.1) :1.3),
Negative prompt: nsfw,messy,blush,nfixer,
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2137539252, Size: 768x1152
```

```
(anime:1.2),(masterpiece:1.2), (high quality:1.2), (watercolor painting:1.1),anatomy,1 girl,solo,(cowboy shot:1.1), perfect face,18yo,(from front),school girl,
black hair,black cardigan,ribbon,(white hat:1.1),closed eyes,arms behind back,tree,calm,(darkness lighting:1.4),(night:1.4),
standing ,kawaii face, depth of field
Negative prompt: nsfw,messy,blush,nfixer,
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 260664233, Size: 768x1152
```

```
(anime:1.2),(1girl, 12yo, flat:1.2)white dress outdoor
Negative prompt: nsfw,messy,blush,nfixer,
Steps: 28, Sampler: DPM++ SDE Karras, CFG scale: 5, Seed: 2617311573, Size: 768x1152
```
|
vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-15000 | vocabtrimmer | 2023-03-15T11:00:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:46:02Z | # Vocabulary Trimmed [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg): `vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-15000`
This model is a trimmed version of [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-ruquad-qg | vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-15000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 59,424,128 |
| parameter_size_embedding | 256,103,424 | 15,362,048 |
| vocab_size | 250,101 | 15,002 |
| compression_rate_full | 100.0 | 19.8 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 15000 | 2 | |
vocabtrimmer/mt5-small-itquad-qg-trimmed-it-15000 | vocabtrimmer | 2023-03-15T10:57:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:44:20Z | # Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed-it-15000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed-it-15000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 59,424,128 |
| parameter_size_embedding | 256,103,424 | 15,362,048 |
| vocab_size | 250,101 | 15,002 |
| compression_rate_full | 100.0 | 19.8 |
| compression_rate_embedding | 100.0 | 6.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 15000 | 2 | |
vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-10000 | vocabtrimmer | 2023-03-15T10:44:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:30:38Z | # Vocabulary Trimmed [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg): `vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-10000`
This model is a trimmed version of [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-ruquad-qg | vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-10000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 54,305,152 |
| parameter_size_embedding | 256,103,424 | 10,243,072 |
| vocab_size | 250,101 | 10,003 |
| compression_rate_full | 100.0 | 18.09 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 10000 | 2 | |
vocabtrimmer/mt5-small-itquad-qg-trimmed-it-10000 | vocabtrimmer | 2023-03-15T10:43:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:30:01Z | # Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed-it-10000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed-it-10000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 54,304,128 |
| parameter_size_embedding | 256,103,424 | 10,242,048 |
| vocab_size | 250,101 | 10,002 |
| compression_rate_full | 100.0 | 18.09 |
| compression_rate_embedding | 100.0 | 4.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 10000 | 2 | |
kiu020/distilbert-base-uncased-finetuned-squad | kiu020 | 2023-03-15T10:42:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-03-15T09:46:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
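A minimal extractive-QA sketch with the standard pipeline; the question and context are placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="kiu020/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```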
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2109 | 1.0 | 5533 | 1.1356 |
| 0.9553 | 2.0 | 11066 | 1.1270 |
| 0.739 | 3.0 | 16599 | 1.1609 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
marco-c88/distilgpt2-finetuned-wikitext2 | marco-c88 | 2023-03-15T10:40:26Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-15T10:11:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4740
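For reference, a token-level cross-entropy loss of 3.4740 corresponds to a perplexity of roughly exp(3.474) ≈ 32.3.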
## Model description
More information needed
## Intended uses & limitations
More information needed
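A minimal generation sketch, assuming the default GPT-2 tokenizer shipped with the checkpoint; the prompt is a placeholder:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="marco-c88/distilgpt2-finetuned-wikitext2")
print(generator("The history of the region begins", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```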
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.0767 | 1.0 | 794 | 3.7406 |
| 3.8158 | 2.0 | 1588 | 3.6718 |
| 3.7557 | 3.0 | 2382 | 3.6302 |
| 3.6758 | 4.0 | 3176 | 3.5968 |
| 3.6383 | 5.0 | 3970 | 3.5704 |
| 3.5762 | 6.0 | 4764 | 3.5524 |
| 3.5415 | 7.0 | 5558 | 3.5360 |
| 3.5116 | 8.0 | 6352 | 3.5195 |
| 3.485 | 9.0 | 7146 | 3.5116 |
| 3.4587 | 10.0 | 7940 | 3.5033 |
| 3.429 | 11.0 | 8734 | 3.4950 |
| 3.4179 | 12.0 | 9528 | 3.4882 |
| 3.3985 | 13.0 | 10322 | 3.4845 |
| 3.3812 | 14.0 | 11116 | 3.4825 |
| 3.3671 | 15.0 | 11910 | 3.4795 |
| 3.3547 | 16.0 | 12704 | 3.4751 |
| 3.3472 | 17.0 | 13498 | 3.4744 |
| 3.3393 | 18.0 | 14292 | 3.4743 |
| 3.3334 | 19.0 | 15086 | 3.4740 |
| 3.3309 | 20.0 | 15880 | 3.4740 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
vocabtrimmer/mt5-small-esquad-qg-trimmed-es-5000 | vocabtrimmer | 2023-03-15T10:37:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:15:01Z | # Vocabulary Trimmed [lmqg/mt5-small-esquad-qg](https://huggingface.co/lmqg/mt5-small-esquad-qg): `vocabtrimmer/mt5-small-esquad-qg-trimmed-es-5000`
This model is a trimmed version of [lmqg/mt5-small-esquad-qg](https://huggingface.co/lmqg/mt5-small-esquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-esquad-qg | vocabtrimmer/mt5-small-esquad-qg-trimmed-es-5000 |
|:---------------------------|:---------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 49,185,152 |
| parameter_size_embedding | 256,103,424 | 5,123,072 |
| vocab_size | 250,101 | 5,003 |
| compression_rate_full | 100.0 | 16.39 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| es | vocabtrimmer/mc4_validation | text | es | validation | 5000 | 2 | |
vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-90000 | vocabtrimmer | 2023-03-15T10:34:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:21:59Z | # Vocabulary Trimmed [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg): `vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-90000`
This model is a trimmed version of [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-ruquad-qg | vocabtrimmer/mt5-small-ruquad-qg-trimmed-ru-90000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 136,224,128 |
| parameter_size_embedding | 256,103,424 | 92,162,048 |
| vocab_size | 250,101 | 90,002 |
| compression_rate_full | 100.0 | 45.38 |
| compression_rate_embedding | 100.0 | 35.99 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ru | vocabtrimmer/mc4_validation | text | ru | validation | 90000 | 2 | |
vocabtrimmer/mt5-small-itquad-qg-trimmed-it-5000 | vocabtrimmer | 2023-03-15T10:29:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-15T10:15:05Z | # Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed-it-5000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of language models to reduce model size.
The following table summarizes the trimming process.
| | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed-it-5000 |
|:---------------------------|:---------------------------|:---------------------------------------------------|
| parameter_size_full | 300,165,504 | 49,185,152 |
| parameter_size_embedding | 256,103,424 | 5,123,072 |
| vocab_size | 250,101 | 5,003 |
| compression_rate_full | 100.0 | 16.39 |
| compression_rate_embedding | 100.0 | 2.0 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 5000 | 2 | |
AndyPig/ppo-LunarLander-v2 | AndyPig | 2023-03-15T10:24:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T10:24:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 290.81 +/- 19.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint (the exact filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; adjust the filename if it differs.
checkpoint = load_from_hub(repo_id="AndyPig/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dmargutierrez/distilbert-base-multilingual-cased-WNUT-ner | dmargutierrez | 2023-03-15T10:16:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-15T10:09:19Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-multilingual-cased-WNUT-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5496503496503496
- name: Recall
type: recall
value: 0.36422613531047265
- name: F1
type: f1
value: 0.4381270903010034
- name: Accuracy
type: accuracy
value: 0.9468667179618706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-WNUT-ner
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3516
- Precision: 0.5497
- Recall: 0.3642
- F1: 0.4381
- Accuracy: 0.9469
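Note that the reported F1 is simply the harmonic mean of precision and recall: 2 × 0.5497 × 0.3642 / (0.5497 + 0.3642) ≈ 0.438.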
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2727 | 0.6626 | 0.2530 | 0.3662 | 0.9402 |
| No log | 2.0 | 426 | 0.2636 | 0.5895 | 0.2715 | 0.3718 | 0.9429 |
| 0.1729 | 3.0 | 639 | 0.2933 | 0.5931 | 0.3040 | 0.4020 | 0.9447 |
| 0.1729 | 4.0 | 852 | 0.2861 | 0.5437 | 0.3457 | 0.4227 | 0.9453 |
| 0.0503 | 5.0 | 1065 | 0.3270 | 0.5627 | 0.3494 | 0.4311 | 0.9455 |
| 0.0503 | 6.0 | 1278 | 0.3277 | 0.5451 | 0.3531 | 0.4286 | 0.9463 |
| 0.0503 | 7.0 | 1491 | 0.3471 | 0.5828 | 0.3457 | 0.4340 | 0.9467 |
| 0.0231 | 8.0 | 1704 | 0.3594 | 0.5801 | 0.3457 | 0.4332 | 0.9464 |
| 0.0231 | 9.0 | 1917 | 0.3550 | 0.5567 | 0.3503 | 0.4300 | 0.9467 |
| 0.0121 | 10.0 | 2130 | 0.3516 | 0.5497 | 0.3642 | 0.4381 | 0.9469 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Christian90/dqn-SpaceInvadersNoFrameskip-v4 | Christian90 | 2023-03-15T10:15:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T10:13:18Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 521.50 +/- 219.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Christian90 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Christian90 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Christian90
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
yumingyi/dqn-SpaceInvadersNoFrameskip-v4 | yumingyi | 2023-03-15T10:11:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T10:11:03Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 529.00 +/- 143.68
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yumingyi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yumingyi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yumingyi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
oyvindgrutle/amk-whisper | oyvindgrutle | 2023-03-15T10:07:20Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-01-25T11:17:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: amk-whisper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amk-whisper
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1902
- Wer: 40.3587
## Model description
More information needed
## Intended uses & limitations
More information needed
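A minimal transcription sketch; the audio path is a placeholder and any 16 kHz mono file should work:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="oyvindgrutle/amk-whisper")
print(asr("sample.wav")["text"])
```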
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 20.0 | 20 | 0.7838 | 30.9417 |
| 0.8511 | 40.0 | 40 | 1.0878 | 44.8430 |
| 0.0794 | 60.0 | 60 | 1.1466 | 39.4619 |
| 0.001 | 80.0 | 80 | 1.1872 | 39.9103 |
| 0.0004 | 100.0 | 100 | 1.1902 | 40.3587 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
joheras/roberta-base-biomedical-clinical-es-finetuned-clinais | joheras | 2023-03-15T10:06:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-03-15T09:50:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-clinais
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-clinais
This model is a fine-tuned version of [BSC-LT/roberta-base-biomedical-clinical-es](https://huggingface.co/BSC-LT/roberta-base-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 171 | 1.3679 |
| 1.4311 | 2.0 | 342 | 1.2926 |
| 1.4311 | 3.0 | 513 | 1.2896 |
| 1.3363 | 4.0 | 684 | 1.3143 |
| 1.3363 | 5.0 | 855 | 1.3097 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
|
lora-library/wyt | lora-library | 2023-03-15T09:33:11Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-03-15T09:33:07Z | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: wangyanting
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - wyt
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "wangyanting" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: wyt




|
diffusers/ddpm-cifar10-32-demo | diffusers | 2023-03-15T09:20:32Z | 2 | 1 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"arxiv:2006.11239",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-03-15T09:09:10Z | ---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
duplicated_from: google/ddpm-cifar10-32
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]
# save image
image.save("ddpm_generated_image.png")
```
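As a quick illustration of the scheduler note above (not part of the original snippet), the same checkpoint can be sampled much faster with the DDIM pipeline; the step count below is an arbitrary choice, not a recommendation from the authors:
```python
# Sketch: faster sampling of the same checkpoint via DDIM (step count chosen arbitrarily).
from diffusers import DDIMPipeline

ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```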
For more detailed information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb).
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4.  |
peterdamn/ppo-CartPole-v1 | peterdamn | 2023-03-15T09:07:19Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T09:05:05Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -134.11 +/- 65.85
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'peterdamn/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
mouss/autotrain-bikes_1-41171106189 | mouss | 2023-03-15T08:59:13Z | 39 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:mouss/autotrain-data-bikes_1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-03-15T08:58:08Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- mouss/autotrain-data-bikes_1
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.41665410499999395
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 41171106189
- CO2 Emissions (in grams): 0.4167
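## Usage
A hedged usage sketch (not part of the original card), assuming the checkpoint works with the standard `image-classification` pipeline; the example image URL is taken from the widget configuration and the class labels come from the AutoTrain dataset:
```python
# Hedged sketch: classify an image with the fine-tuned Swin checkpoint via the transformers pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="mouss/autotrain-bikes_1-41171106189")
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)
```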
## Validation Metrics
- Loss: 0.368
- Accuracy: 0.818
- Precision: 0.882
- Recall: 0.789
- AUC: 0.921
- F1: 0.833 |
dvruette/oasst-pythia-12b-6000-steps | dvruette | 2023-03-15T08:48:05Z | 1,488 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-09T12:40:11Z | https://wandb.ai/open-assistant/supervised-finetuning/runs/qqtzt19n |
dvruette/oasst-pythia-12b-3000-steps | dvruette | 2023-03-15T08:47:44Z | 0 | 0 | null | [
"region:us"
] | null | 2023-03-09T14:36:57Z | https://wandb.ai/open-assistant/supervised-finetuning/runs/qqtzt19n |
dvruette/oasst-pythia-12b-flash-attn-5000-steps | dvruette | 2023-03-15T08:46:58Z | 1,500 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-12T10:42:00Z | https://wandb.ai/open-assistant/supervised-finetuning/runs/uwqcwaau |
EarthnDusk/FFXIV_Miqote_MoonKeeper_Lora | EarthnDusk | 2023-03-15T08:45:16Z | 0 | 1 | null | [
"Lycoris",
"LoHA",
"Lora",
"stable diffusion",
"text to image",
"ffxiv",
"miqote",
"en",
"dataset:Duskfallcrew/FFXIV_Data_and_Lora",
"dataset:Duskfallcrew/miqoteupdate",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-15T08:13:56Z | ---
license: creativeml-openrail-m
datasets:
- Duskfallcrew/FFXIV_Data_and_Lora
- Duskfallcrew/miqoteupdate
language:
- en
tags:
- Lycoris
- LoHA
- Lora
- stable diffusion
- text to image
- ffxiv
- miqote
---
Output updates are coming soon. We have some already, but if you need to see them before we post them here, the models are up on Civitai:
https://civitai.com/models/14823
Datasets are listed because one of them is private - the LoRA trainer had an option to upload the data here, but we forgot we had already done so.
Data set here: https://huggingface.co/datasets/Duskfallcrew/FFXIV_Data_and_Lora
Also note: the MIQOTE UPDATE LoRA is a LYCORIS/LoHA and needs the special A1111 plugin: https://github.com/KohakuBlueleaf/a1111-sd-webui-locon |
amerssun/tww_result_lora | amerssun | 2023-03-15T08:43:35Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-03-15T08:38:16Z |
---
license: creativeml-openrail-m
base_model: /mnt/user/sunzhaoxu/diffusion/cilloutmix/
instance_prompt: a photo of tww 1girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - amerssun/tww_result_lora
These are LoRA adaptation weights for /mnt/user/sunzhaoxu/diffusion/cilloutmix/. The weights were trained on the instance prompt "a photo of tww 1girl" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
11Anupam/demo_001 | 11Anupam | 2023-03-15T08:37:45Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"pytorch",
"tf",
"marian",
"en",
"arxiv:1910.09700",
"region:us"
] | null | 2023-03-15T07:27:17Z | ---
language:
- en
library_name: adapter-transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Anupam]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [python]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huam/dqn-Taxi-v3 | huam | 2023-03-15T08:17:36Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"Taxi-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T08:17:31Z | ---
library_name: stable-baselines3
tags:
- Taxi-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: -200.00 +/- 0.00
name: mean_reward
verified: false
---
# **DQN** Agent playing **Taxi-v3**
This is a trained model of a **DQN** agent playing **Taxi-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was pushed):
```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="huam/dqn-Taxi-v3", filename="dqn-Taxi-v3.zip")
model = DQN.load(checkpoint)
```
|
giovannefeitosa/chatbot-about-pele | giovannefeitosa | 2023-03-15T08:09:22Z | 0 | 0 | sklearn | [
"sklearn",
"question-answering",
"chatbot",
"brazil",
"text2text-generation",
"en",
"dataset:en_core_web_sm",
"license:cc-by-nc-4.0",
"region:us"
] | text2text-generation | 2023-03-15T06:33:28Z | ---
language:
- en
datasets:
- en_core_web_sm
thumbnail: >-
https://huggingface.co/giovannefeitosa/chatbot-about-pele/raw/main/images/pele.jpeg
tags:
- question-answering
- chatbot
- brazil
license: cc-by-nc-4.0
pipeline_tag: text2text-generation
library_name: sklearn
---
# Chatbot about Pele
This is a demo project.
> library_name: sklearn |
kailashsp/dreambooth_diffusion_model | kailashsp | 2023-03-15T08:08:00Z | 2 | 0 | keras | [
"keras",
"tf-keras",
"text-to-image",
"dataset:kailashsp/class-images",
"license:apache-2.0",
"region:us"
] | text-to-image | 2023-03-15T07:54:09Z | ---
library_name: keras
license: apache-2.0
datasets:
- kailashsp/class-images
pipeline_tag: text-to-image
---
## Model description
This is a Stable Diffusion model fine-tuned with DreamBooth on Pokémon images to generate cuter Pokémon.
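As a rough loading sketch (not from the original card), and assuming the repository stores a standard Keras model pushed with `push_to_hub_keras`, the weights can be pulled back with the Hub's Keras helper; generating images would then typically mean swapping these weights into a `keras_cv` StableDiffusion pipeline in place of its diffusion model:
```python
# Minimal sketch, assuming the repo holds a standard Keras model pushed with push_to_hub_keras.
from huggingface_hub import from_pretrained_keras

dreambooth_diffusion_model = from_pretrained_keras("kailashsp/dreambooth_diffusion_model")
dreambooth_diffusion_model.summary()
```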
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| inner_optimizer.class_name | Custom>RMSprop |
| inner_optimizer.config.name | RMSprop |
| inner_optimizer.config.weight_decay | None |
| inner_optimizer.config.clipnorm | None |
| inner_optimizer.config.global_clipnorm | None |
| inner_optimizer.config.clipvalue | None |
| inner_optimizer.config.use_ema | False |
| inner_optimizer.config.ema_momentum | 0.99 |
| inner_optimizer.config.ema_overwrite_frequency | 100 |
| inner_optimizer.config.jit_compile | True |
| inner_optimizer.config.is_legacy_optimizer | False |
| inner_optimizer.config.learning_rate | 0.0010000000474974513 |
| inner_optimizer.config.rho | 0.9 |
| inner_optimizer.config.momentum | 0.0 |
| inner_optimizer.config.epsilon | 1e-07 |
| inner_optimizer.config.centered | False |
| dynamic | True |
| initial_scale | 32768.0 |
| dynamic_growth_steps | 2000 |
| training_precision | mixed_float16 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
lyeonii/bert-small | lyeonii | 2023-03-15T08:07:54Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:1908.08962",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-03-15T08:05:48Z | ---
license: mit
language:
- en
---
# BERT-Small (uncased)
This is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking)
released by [google-research/bert](https://github.com/google-research/bert).
These BERT models were originally released as TensorFlow checkpoints; this repository contains the version converted to PyTorch.
More information can be found in [google-research/bert](https://github.com/google-research/bert) or [lyeoni/convert-tf-to-pytorch](https://github.com/lyeoni/convert-tf-to-pytorch).
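As a quick usage sketch (not part of the original card), the converted checkpoint should load with the standard `transformers` Auto classes for feature extraction:
```python
# Hedged sketch: extract features with the converted PyTorch checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lyeonii/bert-small")
model = AutoModel.from_pretrained("lyeonii/bert-small")

inputs = tokenizer("Well-read students learn better.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```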
## Evaluation
Here are the evaluation scores (F1/Accuracy) for the MRPC task.
|Model|MRPC|
|-|:-:|
|BERT-Tiny|81.22/68.38|
|BERT-Mini|81.43/69.36|
|BERT-Small|81.41/70.34|
|BERT-Medium|83.33/73.53|
|BERT-Base|85.62/78.19|
### References
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
``` |
lyeonii/bert-medium | lyeonii | 2023-03-15T08:04:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:1908.08962",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-03-15T08:01:26Z | ---
license: mit
language:
- en
---
# BERT-Medium (uncased)
This is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking)
released by [google-research/bert](https://github.com/google-research/bert).
These BERT models were originally released as TensorFlow checkpoints; this repository contains the version converted to PyTorch.
More information can be found in [google-research/bert](https://github.com/google-research/bert) or [lyeoni/convert-tf-to-pytorch](https://github.com/lyeoni/convert-tf-to-pytorch).
## Evaluation
Here are the evaluation scores (F1/Accuracy) for the MRPC task.
|Model|MRPC|
|-|:-:|
|BERT-Tiny|81.22/68.38|
|BERT-Mini|81.43/69.36|
|BERT-Small|81.41/70.34|
|BERT-Medium|83.33/73.53|
|BERT-Base|85.62/78.19|
### References
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
``` |
auditi41/wav2vec2-large-xlsr-turkish | auditi41 | 2023-03-15T08:03:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-03-14T07:30:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: train+validation
args: tr
metrics:
- name: Wer
type: wer
value: 0.48268818302522726
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-turkish
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4242
- Wer: 0.4827
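The card does not ship a usage snippet; the following is an illustrative sketch only, assuming 16 kHz speech input and using a placeholder file name:
```python
# Hedged sketch: transcribe Turkish speech ("sample.wav" is a placeholder; audio should be 16 kHz).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="auditi41/wav2vec2-large-xlsr-turkish")
print(asr("sample.wav")["text"])
```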
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0376 | 4.26 | 400 | 2.3690 | 1.0020 |
| 0.7983 | 8.51 | 800 | 0.4755 | 0.6328 |
| 0.3157 | 12.77 | 1200 | 0.4051 | 0.5408 |
| 0.2197 | 17.02 | 1600 | 0.4156 | 0.5149 |
| 0.1643 | 21.28 | 2000 | 0.4286 | 0.5036 |
| 0.1305 | 25.53 | 2400 | 0.4247 | 0.4908 |
| 0.1178 | 29.79 | 2800 | 0.4242 | 0.4827 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
lyeonii/bert-mini | lyeonii | 2023-03-15T07:57:26Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:1908.08962",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-03-15T07:52:52Z | ---
license: mit
language:
- en
---
# BERT-Mini (uncased)
This is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking)
released by [google-research/bert](https://github.com/google-research/bert).
These BERT models were originally released as TensorFlow checkpoints; this repository contains the version converted to PyTorch.
More information can be found in [google-research/bert](https://github.com/google-research/bert) or [lyeoni/convert-tf-to-pytorch](https://github.com/lyeoni/convert-tf-to-pytorch).
## Evaluation
Here are the evaluation scores (F1/Accuracy) for the MRPC task.
|Model|MRPC|
|-|:-:|
|BERT-Tiny|81.22/68.38|
|BERT-Mini|81.43/69.36|
|BERT-Small|81.41/70.34|
|BERT-Medium|83.33/73.53|
|BERT-Base|85.62/78.19|
### References
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
``` |
Perse90/ppo-Huggy | Perse90 | 2023-03-15T07:53:56Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-03-15T07:53:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Perse90/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
thang123/wav2vec2-tiengviet1 | thang123 | 2023-03-15T06:59:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-03-15T04:46:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-tiengviet1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-tiengviet1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7302
- Wer: 1.0118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 14.8304 | 39.67 | 40 | 4.3168 | 1.0 |
| 5.5892 | 79.67 | 80 | 3.5454 | 1.0 |
| 5.2113 | 119.67 | 120 | 3.4845 | 1.0 |
| 4.9995 | 159.67 | 160 | 3.5783 | 1.0 |
| 4.7958 | 199.67 | 200 | 3.1850 | 1.0 |
| 4.4776 | 239.67 | 240 | 2.9864 | 1.0 |
| 4.2546 | 279.67 | 280 | 2.7302 | 1.0118 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Madronus/MultiLabel_V3 | Madronus | 2023-03-15T06:52:53Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-03-14T22:02:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MultiLabel_V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiLabel_V3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9683
- Accuracy: 0.7370
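A hedged inference sketch (not part of the original card): the image path is a placeholder, the label set comes from the unspecified training dataset, and the output is treated as single-label (argmax) for simplicity:
```python
# Hedged sketch: run the fine-tuned ViT classifier on one image ("example.jpg" is a placeholder).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("Madronus/MultiLabel_V3")
model = AutoModelForImageClassification.from_pretrained("Madronus/MultiLabel_V3")

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```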
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8572 | 0.1 | 100 | 1.1607 | 0.6466 |
| 0.8578 | 0.2 | 200 | 1.1956 | 0.6499 |
| 0.7362 | 0.3 | 300 | 1.1235 | 0.6885 |
| 0.8569 | 0.39 | 400 | 1.0460 | 0.6891 |
| 0.4851 | 0.49 | 500 | 1.1213 | 0.6891 |
| 0.7252 | 0.59 | 600 | 1.1512 | 0.6720 |
| 0.6333 | 0.69 | 700 | 1.1039 | 0.6913 |
| 0.6239 | 0.79 | 800 | 1.0636 | 0.7001 |
| 0.2768 | 0.89 | 900 | 1.0386 | 0.7073 |
| 0.4872 | 0.99 | 1000 | 1.0311 | 0.7062 |
| 0.3049 | 1.09 | 1100 | 1.0437 | 0.7155 |
| 0.1435 | 1.18 | 1200 | 1.0343 | 0.7222 |
| 0.2088 | 1.28 | 1300 | 1.0784 | 0.7194 |
| 0.4972 | 1.38 | 1400 | 1.1072 | 0.7166 |
| 0.3604 | 1.48 | 1500 | 1.0438 | 0.7150 |
| 0.2726 | 1.58 | 1600 | 1.0077 | 0.7293 |
| 0.3106 | 1.68 | 1700 | 1.0029 | 0.7326 |
| 0.3259 | 1.78 | 1800 | 0.9906 | 0.7310 |
| 0.3323 | 1.88 | 1900 | 0.9729 | 0.7359 |
| 0.2998 | 1.97 | 2000 | 0.9683 | 0.7370 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
MikolajDeja/facebook-nllb-200-distilled-600M-en-pl-3-para_crawl-finetune | MikolajDeja | 2023-03-15T06:39:46Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:para_crawl",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-03-10T13:34:55Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- para_crawl
model-index:
- name: facebook-nllb-200-distilled-600M-en-pl-3-para_crawl-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook-nllb-200-distilled-600M-en-pl-3-para_crawl-finetune
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the para_crawl dataset.
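A hedged usage sketch (not from the original card): NLLB checkpoints are normally driven through the translation pipeline with explicit FLORES-200 language codes, so English-to-Polish translation would look roughly like this:
```python
# Hedged sketch: English -> Polish with this NLLB fine-tune (FLORES-200 language codes).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="MikolajDeja/facebook-nllb-200-distilled-600M-en-pl-3-para_crawl-finetune",
    src_lang="eng_Latn",
    tgt_lang="pol_Latn",
)
print(translator("The weather is nice today.")[0]["translation_text"])
```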
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
kuma-s/xlm-roberta-base-finetuned-panx-de | kuma-s | 2023-03-15T06:39:31Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-03-15T06:22:34Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8638300289723342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
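The following is an illustrative sketch, not part of the original card: German named-entity recognition through the token-classification pipeline, with sub-word predictions grouped into entity spans; the example sentence is arbitrary:
```python
# Hedged sketch: German NER with the fine-tuned checkpoint, grouping sub-word predictions into entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kuma-s/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```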
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
jmbt22/marian-finetuned-opus-mt-en-tl | jmbt22 | 2023-03-15T06:06:16Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:tatoeba",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-03-15T05:50:58Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- tatoeba
metrics:
- bleu
model-index:
- name: marian-finetuned-opus-mt-en-tl
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: tatoeba
type: tatoeba
config: en-tl
split: train
args: en-tl
metrics:
- name: Bleu
type: bleu
value: 35.9113771495936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-opus-mt-en-tl
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tl](https://huggingface.co/Helsinki-NLP/opus-mt-en-tl) on the tatoeba dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2611
- Bleu: 35.9114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
rm1768/wav2vec2-large-xlsr-turkish-demo-colab | rm1768 | 2023-03-15T06:04:19Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-03-10T08:07:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-turkish-demo-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.4821775099581248
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-turkish-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4151
- Wer: 0.4822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2487 | 4.26 | 400 | 1.6455 | 1.0778 |
| 0.71 | 8.51 | 800 | 0.4428 | 0.6138 |
| 0.3073 | 12.77 | 1200 | 0.4214 | 0.5517 |
| 0.2136 | 17.02 | 1600 | 0.4345 | 0.5193 |
| 0.1624 | 21.28 | 2000 | 0.4366 | 0.5026 |
| 0.1298 | 25.53 | 2400 | 0.4111 | 0.4949 |
| 0.1174 | 29.79 | 2800 | 0.4151 | 0.4822 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
ashishj20/q-FrozenLake-v1-4x4-noslippery | ashishj20 | 2023-03-15T05:39:52Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T05:35:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noslippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ashishj20/q-FrozenLake-v1-4x4-noslippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
edgemac20/q-FrozenLake-v1-4x4-noSlippery | edgemac20 | 2023-03-15T04:53:27Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T04:53:22Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="edgemac20/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ajankelo/ppo-LunarLander-v2 | ajankelo | 2023-03-15T04:44:08Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-15T04:43:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.48 +/- 19.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was pushed):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="ajankelo/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NurFathihaTahiatSeeum/Fake-news-detection | NurFathihaTahiatSeeum | 2023-03-15T04:37:15Z | 0 | 0 | null | [
"Natural Language Processing",
"text-classification",
"en",
"dataset:fake_news_english",
"region:us"
] | text-classification | 2023-03-14T10:18:22Z | ---
datasets:
- fake_news_english
language:
- en
pipeline_tag: text-classification
tags:
- Natural Language Processing
---
This Natural Language Processing (NLP) project was built in Google Colab using the Bidirectional Encoder Representations from Transformers (BERT) model.
Dataset: https://www.kaggle.com/datasets/sadikaljarif/fake-news-detection-dataset-english |
eduiqe/Pixelicopter | eduiqe | 2023-03-15T03:52:07Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-05T07:06:13Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelicopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.30 +/- 14.56
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
marcospiau/finetuned_minilm | marcospiau | 2023-03-15T03:35:50Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-03-15T03:35:17Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned_minilm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_minilm
This model is a fine-tuned version of [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6736
- Accuracy: 0.9023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5371 | 1.0 | 619 | 0.2941 | 0.8782 |
| 0.2763 | 2.0 | 1238 | 0.2590 | 0.8986 |
| 0.1899 | 3.0 | 1857 | 0.3081 | 0.8959 |
| 0.1257 | 4.0 | 2476 | 0.2576 | 0.9177 |
| 0.0929 | 5.0 | 3095 | 0.3949 | 0.9059 |
| 0.0806 | 6.0 | 3714 | 0.3304 | 0.9173 |
| 0.0629 | 7.0 | 4333 | 0.4214 | 0.9073 |
| 0.0474 | 8.0 | 4952 | 0.4625 | 0.9145 |
| 0.0498 | 9.0 | 5571 | 0.4227 | 0.9236 |
| 0.049 | 10.0 | 6190 | 0.5549 | 0.8945 |
| 0.0411 | 11.0 | 6809 | 0.3340 | 0.9341 |
| 0.0272 | 12.0 | 7428 | 0.3317 | 0.9291 |
| 0.0264 | 13.0 | 8047 | 0.4099 | 0.9305 |
| 0.0279 | 14.0 | 8666 | 0.4092 | 0.9268 |
| 0.0242 | 15.0 | 9285 | 0.4418 | 0.9318 |
| 0.0241 | 16.0 | 9904 | 0.4352 | 0.9273 |
| 0.0238 | 17.0 | 10523 | 0.5306 | 0.9259 |
| 0.0216 | 18.0 | 11142 | 0.4267 | 0.9241 |
| 0.0166 | 19.0 | 11761 | 0.5134 | 0.9255 |
| 0.0182 | 20.0 | 12380 | 0.6736 | 0.9023 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
yz54321/wintermoonmix | yz54321 | 2023-03-15T03:09:35Z | 0 | 2 | null | [
"region:us"
] | null | 2023-03-13T08:21:47Z | WinterMoonMix:
https://civitai.com/models/12433/wintermoonmix
LulubearMix:
https://civitai.com/models/18934/lulubearmix |
mipin5/OliHye2 | mipin5 | 2023-03-15T03:02:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-15T02:58:37Z | ---
license: creativeml-openrail-m
---
|
Eduardo84/Xx1 | Eduardo84 | 2023-03-15T02:55:52Z | 0 | 0 | null | [
"license:bsd-3-clause-clear",
"region:us"
] | null | 2023-03-15T02:55:51Z | ---
license: bsd-3-clause-clear
---
|
coreml-community/coreml-seek.art_MEGA | coreml-community | 2023-03-15T02:52:28Z | 0 | 2 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-01-30T00:51:18Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br>
- Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
- `original` version is only compatible with CPU & GPU option.<br>
- Custom resolution versions are tagged accordingly.<br>
- `vae` tagged files have a vae embedded into the model.<br>
- Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br>
- Some of the models were converted with `vae-encoder` for i2i.
- Models that are 32 bit will have "fp32" in the filename.
# Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# seek.art MEGA:
Source(s): [Hugging Face](https://huggingface.co/coreco/seek.art_MEGA) - [CivitAI](https://civitai.com/models/1315/seekart-mega)
# Seek.art MEGA is a general use "anything" model that significantly improves on 1.5 across dozens of styles. Created by Coreco at [seek.art](https://seek.art/)
This model was trained on nearly 10k high-quality public domain digital artworks with the goal of improving output quality across the board. We find the model to be highly flexible in its ability to mix various styles, subjects, and details. We recommend resolutions above 640px in one or both dimensions for best results.
You can try this model and several others for free at [seek.art](https://seek.art/).
We also recommend an inference tool supporting prompt weighting and high resolution optimization / fixing for best results. We suggest [InvokeAI](https://github.com/invoke-ai/InvokeAI) as a sensibly licensed and fully featured open-source inference tool.
### Examples
<img src="https://huggingface.co/coreco/seek.art_MEGA/resolve/main/examples.png" style="max-width: 800px;" width="100%"/>
The above example images including the prompts and all relevant settings are available [here](https://seek.art/explore/search?collection=6112a64d-bd8b-4043-8d96-88c7cfa65c43).
Additionally, search thousands of high quality prompts on [seek.art](https://seek.art/) for free.
### License - This model carries a commercial restricted sub-license, please read carefully:
[License](https://huggingface.co/coreco/seek.art_MEGA/blob/main/LICENSE.txt)
### Use Restrictions
You agree not to use the Model or Derivatives of the Model:
- for the commercial purpose of hosted content generation (inference) without the express written permission of seek.art. Model output for personal use carries no such commercial restriction.
- In any way that violates any applicable national, federal, state, local or international law or regulation;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate personal identifiable information that can be used to harm an individual;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
- To provide medical advice and medical results interpretation;
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use). |
baptiste-pasquier/distilcamembert-allocine | baptiste-pasquier | 2023-03-15T02:42:04Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"fr",
"dataset:allocine",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-02-13T00:36:42Z | ---
language:
- fr
license: mit
tags:
- generated_from_trainer
datasets:
- allocine
widget:
- text: "Un film magnifique avec un duo d'acteurs excellent."
- text: "Grosse déception pour ce thriller qui peine à convaincre."
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilcamembert-allocine
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: allocine
type: allocine
config: allocine
split: validation
args: allocine
metrics:
- name: Accuracy
type: accuracy
value: 0.9714
- name: F1
type: f1
value: 0.9709909727152854
- name: Precision
type: precision
value: 0.9648256399919372
- name: Recall
type: recall
value: 0.9772356063699469
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilcamembert-allocine
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the allocine dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1066
- Accuracy: 0.9714
- F1: 0.9710
- Precision: 0.9648
- Recall: 0.9772
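As a small usage sketch (not from the original card), the classifier can be called through the text-classification pipeline; the example review is the positive widget example above:
```python
# Hedged sketch: score a French movie review with the fine-tuned DistilCamemBERT classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="baptiste-pasquier/distilcamembert-allocine")
print(classifier("Un film magnifique avec un duo d'acteurs excellent."))
```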
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
| :-----------: | :---: | :---: | :-------------: | :------: | :----: | :-------: | :----: |
| 0.1504 | 0.2 | 500 | 0.1290 | 0.9555 | 0.9542 | 0.9614 | 0.9470 |
| 0.1334 | 0.4 | 1000 | 0.1049 | 0.9624 | 0.9619 | 0.9536 | 0.9703 |
| 0.1158 | 0.6 | 1500 | 0.1052 | 0.963 | 0.9627 | 0.9498 | 0.9760 |
| 0.1153 | 0.8 | 2000 | 0.0949 | 0.9661 | 0.9653 | 0.9686 | 0.9620 |
| 0.1053 | 1.0 | 2500 | 0.0936 | 0.9666 | 0.9663 | 0.9542 | 0.9788 |
| 0.0755 | 1.2 | 3000 | 0.0987 | 0.97 | 0.9695 | 0.9644 | 0.9748 |
| 0.0716 | 1.4 | 3500 | 0.1078 | 0.9688 | 0.9684 | 0.9598 | 0.9772 |
| 0.0688 | 1.6 | 4000 | 0.1051 | 0.9673 | 0.9670 | 0.9552 | 0.9792 |
| 0.0691 | 1.8 | 4500 | 0.0940 | 0.9709 | 0.9704 | 0.9688 | 0.9720 |
| 0.0733 | 2.0 | 5000 | 0.1038 | 0.9686 | 0.9683 | 0.9558 | 0.9812 |
| 0.0476 | 2.2 | 5500 | 0.1066 | 0.9714 | 0.9710 | 0.9648 | 0.9772 |
| 0.047 | 2.4 | 6000 | 0.1098 | 0.9689 | 0.9686 | 0.9587 | 0.9788 |
| 0.0431 | 2.6 | 6500 | 0.1110 | 0.9711 | 0.9706 | 0.9666 | 0.9747 |
| 0.0464 | 2.8 | 7000 | 0.1149 | 0.9697 | 0.9694 | 0.9592 | 0.9798 |
| 0.0342 | 3.0 | 7500 | 0.1122 | 0.9703 | 0.9699 | 0.9621 | 0.9778 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
thang123/wav2vec2-large-xlsr-turkish-demo-colab | thang123 | 2023-03-15T02:21:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-03-13T09:27:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-turkish-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-turkish-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
brunonishimoto/q-learning-Taxi-v3 | brunonishimoto | 2023-03-15T01:57:13Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-03-14T00:10:33Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="brunonishimoto/q-learning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Xit1/gpt-1 | Xit1 | 2023-03-15T01:46:35Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"en",
"ur",
"tl",
"dataset:fka/awesome-chatgpt-prompts",
"license:afl-3.0",
"region:us"
] | null | 2023-03-14T18:07:55Z | ---
license: afl-3.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
- ur
- tl
metrics:
- accuracy
library_name: adapter-transformers
--- |
peteli/hometown | peteli | 2023-03-15T01:29:02Z | 0 | 1 | diffusers | [
"diffusers",
"paddlepaddle",
"stable-diffusion",
"stable-diffusion-ppdiffusers",
"text-to-image",
"ppdiffusers",
"lora",
"en",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-03-15T00:48:39Z | ---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a sea of lavender and gold flowers in the world of fairy tales is my hometown
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
language:
- en
library_name: diffusers
---
# LoRA DreamBooth - peteli/hometown
The LoRA weights in this repository were trained on runwayml/stable-diffusion-v1-5 using the [DreamBooth](https://dreambooth.github.io/) technique with the instance prompt "a sea of lavender and gold flowers in the world of fairy tales is my hometown". |
kmcgrath/sd-controlnet-canny-fork | kmcgrath | 2023-03-15T01:16:06Z | 15 | 0 | diffusers | [
"diffusers",
"pytorch",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"endpoints_compatible",
"diffusers:StableDiffusionControlNetPipeline",
"region:us"
] | text-to-image | 2023-03-09T19:39:54Z | ---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
---
# Controlnet - *Canny Version*
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
This checkpoint corresponds to the ControlNet conditioned on **Canny edges**.
It can be used in combination with [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img).

## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**
@misc{zhang2023adding,
title={Adding Conditional Control to Text-to-Image Diffusion Models},
author={Lvmin Zhang and Maneesh Agrawala},
year={2023},
eprint={2302.05543},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
## Introduction
Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang, Maneesh Agrawala.
The abstract reads as follows:
*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*
## Released Checkpoints
The authors released 8 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
on a different type of conditioning:
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet-openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet-scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
|[lllyasviel/sd-controlnet-seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An image following [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/)'s segmentation protocol.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
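Each conditioning type above lives in its own repository, so switching conditionings only means loading a different `ControlNetModel` checkpoint. A minimal sketch (the depth checkpoint is used here purely as an illustration; everything else stays as in the example below):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load the depth-conditioned checkpoint instead of the Canny one;
# only the repository id changes between conditioning types.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
```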
## Example
It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint
has been trained on it.
Experimentally, the checkpoint can also be used with other diffusion models, such as DreamBoothed Stable Diffusion; a sketch of swapping the base checkpoint follows the example below.
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below:
1. Install opencv
```sh
$ pip install opencv-contrib-python
```
2. Let's install `diffusers` and related packages:
```sh
$ pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
```
3. Run the code:
```python
import cv2
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
import numpy as np
from diffusers.utils import load_image
# Download the example input image
image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-hed/resolve/main/images/bird.png")
image = np.array(image)

# Detect Canny edges to build the conditioning image
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)

# Stack the single-channel edge map into a 3-channel PIL image
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)
# Load the Canny-conditioned ControlNet weights
controlnet = ControlNetModel.from_pretrained(
    "fusing/stable-diffusion-v1-5-controlnet-canny", torch_dtype=torch.float16
)

# Attach the ControlNet to a Stable Diffusion v1-5 pipeline
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)

# Use the faster UniPC scheduler
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

# Generate an image of a bird, conditioned on the Canny edge map
image = pipe("bird", image, num_inference_steps=20).images[0]
image.save('images/bird_canny_out.png')
```



### Training
The Canny edge model was trained on 3M edge-image/caption pairs for 600 GPU-hours on Nvidia A100 80GB GPUs, using Stable Diffusion 1.5 as the base model.
### Blog post
For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet). |