| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-14 12:27:51) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 520 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-14 12:25:52) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
Terps/a2c-PandaReachDense-v3 | Terps | 2023-09-13T20:22:07Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T20:12:20Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Note: the checkpoint filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(repo_id="Terps/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
JuanPH/Edgar-Allan-Poe | JuanPH | 2023-09-13T20:16:55Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T20:16:34Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Edgar-Allan-Poe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Edgar-Allan-Poe
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch mapping them follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
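A minimal, hedged sketch (not the author's training script) of how the hyperparameters above map onto `transformers`' `TrainingArguments`; `output_dir` and the per-epoch evaluation strategy are assumptions:
```python
from transformers import TrainingArguments

# Hedged sketch reproducing the listed hyperparameters; dataset and model setup omitted.
training_args = TrainingArguments(
    output_dir="Edgar-Allan-Poe",        # assumption
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,                      # Adam betas/epsilon as listed
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",         # assumption: the card reports per-epoch validation loss
)
```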
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2008 | 1.0 | 1500 | 1.1911 |
| 1.1894 | 2.0 | 3000 | 1.1652 |
| 1.1036 | 3.0 | 4500 | 1.1545 |
| 1.0117 | 4.0 | 6000 | 1.1507 |
| 0.9543 | 5.0 | 7500 | 1.1515 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
dyvanoff/Referencias-de-Vinos | dyvanoff | 2023-09-13T20:16:10Z | 147 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T20:15:48Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Referencias-de-Vinos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Referencias-de-Vinos
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9310
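A hedged usage sketch, not part of the auto-generated card: the fine-tuned model can be queried with the `transformers` text-generation pipeline (the Spanish prompt below is illustrative only).
```python
from transformers import pipeline

# Hedged sketch; prompt and sampling settings are illustrative assumptions.
generator = pipeline("text-generation", model="dyvanoff/Referencias-de-Vinos")
print(generator("El vino tinto de Mendoza", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```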
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9201 | 1.0 | 3000 | 2.0543 |
| 1.7335 | 2.0 | 6000 | 1.9645 |
| 1.6038 | 3.0 | 9000 | 1.9353 |
| 1.9135 | 4.0 | 12000 | 1.9310 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
xEricCardozo/Martin-Fierro | xEricCardozo | 2023-09-13T20:15:48Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T20:15:33Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Martin-Fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Martin-Fierro
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.109 | 1.0 | 40 | 4.9585 |
| 4.6404 | 2.0 | 80 | 4.8648 |
| 4.3806 | 3.0 | 120 | 4.8268 |
| 4.2234 | 4.0 | 160 | 4.8176 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/leaf_pokemon | CyberHarem | 2023-09-13T20:14:29Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/leaf_pokemon",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T20:00:03Z | ---
license: mit
datasets:
- CyberHarem/leaf_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of leaf_pokemon
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6240, you need to download `6240/leaf_pokemon.pt` as the embedding and `6240/leaf_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
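One possible way to wire the two files into a `diffusers` pipeline is sketched below. This is not from the card: it assumes the base repo provides diffusers-format weights, that the step-6240 files are downloaded locally, and that the HCP-Diffusion LoRA is in a format `diffusers` can read (conversion may be required).
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch only; see the assumptions in the paragraph above.
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("6240/leaf_pokemon.pt", token="leaf_pokemon")  # the pt file acts as an embedding
pipe.load_lora_weights("6240", weight_name="leaf_pokemon.safetensors")     # the safetensors file is the LoRA

image = pipe("leaf_pokemon, long_hair, brown_hair, hat, smile", num_inference_steps=30).images[0]
image.save("leaf_pokemon.png")
```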
**The best step we recommend is 6240**, with a score of 0.870. The trigger words are:
1. `leaf_pokemon`
2. `long_hair, brown_hair, hat, brown_eyes, breasts, smile, blush`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are listed below:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.832 | [Download](7200/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](7200/previews/pattern_6.png) | [<NSFW, click to see>](7200/previews/pattern_7.png) |  | [<NSFW, click to see>](7200/previews/bondage.png) | [<NSFW, click to see>](7200/previews/free.png) |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.806 | [Download](6720/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](6720/previews/pattern_6.png) | [<NSFW, click to see>](6720/previews/pattern_7.png) |  | [<NSFW, click to see>](6720/previews/bondage.png) | [<NSFW, click to see>](6720/previews/free.png) |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| **6240** | **0.870** | [**Download**](6240/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_6.png) | [<NSFW, click to see>](6240/previews/pattern_7.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) | [<NSFW, click to see>](6240/previews/free.png) |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.772 | [Download](5760/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](5760/previews/pattern_6.png) | [<NSFW, click to see>](5760/previews/pattern_7.png) |  | [<NSFW, click to see>](5760/previews/bondage.png) | [<NSFW, click to see>](5760/previews/free.png) |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.757 | [Download](5280/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](5280/previews/pattern_6.png) | [<NSFW, click to see>](5280/previews/pattern_7.png) |  | [<NSFW, click to see>](5280/previews/bondage.png) | [<NSFW, click to see>](5280/previews/free.png) |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.764 | [Download](4800/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](4800/previews/pattern_6.png) | [<NSFW, click to see>](4800/previews/pattern_7.png) |  | [<NSFW, click to see>](4800/previews/bondage.png) | [<NSFW, click to see>](4800/previews/free.png) |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.764 | [Download](4320/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](4320/previews/pattern_6.png) | [<NSFW, click to see>](4320/previews/pattern_7.png) |  | [<NSFW, click to see>](4320/previews/bondage.png) | [<NSFW, click to see>](4320/previews/free.png) |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.640 | [Download](3840/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](3840/previews/pattern_6.png) | [<NSFW, click to see>](3840/previews/pattern_7.png) |  | [<NSFW, click to see>](3840/previews/bondage.png) | [<NSFW, click to see>](3840/previews/free.png) |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.726 | [Download](3360/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](3360/previews/pattern_6.png) | [<NSFW, click to see>](3360/previews/pattern_7.png) |  | [<NSFW, click to see>](3360/previews/bondage.png) | [<NSFW, click to see>](3360/previews/free.png) |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.765 | [Download](2880/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](2880/previews/pattern_6.png) | [<NSFW, click to see>](2880/previews/pattern_7.png) |  | [<NSFW, click to see>](2880/previews/bondage.png) | [<NSFW, click to see>](2880/previews/free.png) |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.443 | [Download](2400/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](2400/previews/pattern_6.png) | [<NSFW, click to see>](2400/previews/pattern_7.png) |  | [<NSFW, click to see>](2400/previews/bondage.png) | [<NSFW, click to see>](2400/previews/free.png) |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.507 | [Download](1920/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](1920/previews/pattern_6.png) | [<NSFW, click to see>](1920/previews/pattern_7.png) |  | [<NSFW, click to see>](1920/previews/bondage.png) | [<NSFW, click to see>](1920/previews/free.png) |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.541 | [Download](1440/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](1440/previews/pattern_6.png) | [<NSFW, click to see>](1440/previews/pattern_7.png) |  | [<NSFW, click to see>](1440/previews/bondage.png) | [<NSFW, click to see>](1440/previews/free.png) |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.397 | [Download](960/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](960/previews/pattern_6.png) | [<NSFW, click to see>](960/previews/pattern_7.png) |  | [<NSFW, click to see>](960/previews/bondage.png) | [<NSFW, click to see>](960/previews/free.png) |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.440 | [Download](480/leaf_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](480/previews/pattern_6.png) | [<NSFW, click to see>](480/previews/pattern_7.png) |  | [<NSFW, click to see>](480/previews/bondage.png) | [<NSFW, click to see>](480/previews/free.png) |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
davera-017/ppo-LunarLander-v3 | davera-017 | 2023-09-13T20:14:22Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T20:14:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.76 +/- 16.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Note: the checkpoint filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(repo_id="davera-017/ppo-LunarLander-v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
|
javier-rooster/Martin-Fierro | javier-rooster | 2023-09-13T20:11:57Z | 148 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T20:10:54Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Martin-Fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Martin-Fierro
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9891 | 1.0 | 40 | 3.7897 |
| 3.4333 | 2.0 | 80 | 3.6631 |
| 3.25 | 3.0 | 120 | 3.6085 |
| 3.0732 | 4.0 | 160 | 3.5934 |
| 2.7839 | 5.0 | 200 | 3.5859 |
| 2.7496 | 6.0 | 240 | 3.5977 |
| 2.5405 | 7.0 | 280 | 3.6088 |
| 2.259 | 8.0 | 320 | 3.6209 |
| 2.3266 | 9.0 | 360 | 3.6279 |
| 2.1343 | 10.0 | 400 | 3.6405 |
| 2.1211 | 11.0 | 440 | 3.6489 |
| 1.9877 | 12.0 | 480 | 3.6576 |
| 1.9446 | 13.0 | 520 | 3.6644 |
| 1.9366 | 14.0 | 560 | 3.6667 |
| 1.9959 | 15.0 | 600 | 3.6687 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
campici0/Martin-Fierro | campici0 | 2023-09-13T20:11:47Z | 147 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T20:11:33Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Martin-Fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Martin-Fierro
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8224 | 1.0 | 40 | 2.6301 |
| 2.4273 | 2.0 | 80 | 2.5539 |
| 2.3424 | 3.0 | 120 | 2.5276 |
| 2.3058 | 4.0 | 160 | 2.5215 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jondurbin/airoboros-l2-70b-2.2 | jondurbin | 2023-09-13T20:09:15Z | 14 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-2.2",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T15:02:26Z | ---
license: llama2
datasets:
- jondurbin/airoboros-2.2
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
Highlights:
- The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below).
- "Clean" version of airoboros-2.2 dataset -- this model __does not__ contain the de-alignment data.
- For an uncensored version, use spicyboros variant: https://hf.co/jondurbin/spicyboros-70b-2.2
- I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less.
- More multiple choice, better awareness, some alignment for normal use cases (but overridable via the system prompt), etc.
Breakdown of the training data:
| Count | Category |
|-------|----------------------------|
| 36 | experience |
| 60 | quiz |
| 63 | card |
| 76 | greeting |
| 100 | detailed\_writing |
| 200 | song |
| 204 | editor |
| 207 | counterfactual\_contextual |
| 268 | cot |
| 339 | theory\_of\_mind |
| 416 | awareness |
| 439 | stylized\_response |
| 457 | misconception |
| 500 | summarization |
| 620 | riddle |
| 719 | agent |
| 800 | plan |
| 873 | gtkm |
| 963 | rp |
| 1000 | wordgame |
| 1279 | multiple\_choice |
| 1519 | joke |
| 1758 | writing |
| 2152 | contextual |
| 2183 | trivia |
| 2364 | roleplay |
| 4699 | general |
| 5775 | coding |
| 11366 | orca |
In other words, it's a fairly general-purpose model, but it focuses heavily on instruction/response pairs rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format:
```
A chat.
USER: {prompt}
ASSISTANT:
```
The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.
Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```
And a chat scenario that doesn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf):
```
A chat between old friends: Timmy and Tommy.
{description of characters}
{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__
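A hedged sketch (not from the original card) of one way to implement that stopping behaviour with the `transformers` `StoppingCriteria` API; the example prompt and generation settings are illustrative assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

class StopOnStrings(StoppingCriteria):
    """Stop generation once any of the given strings appears in the decoded output."""
    def __init__(self, stop_strings, tokenizer):
        self.stop_strings = stop_strings
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs):
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(s in text for s in self.stop_strings)

tokenizer = AutoTokenizer.from_pretrained("jondurbin/airoboros-l2-70b-2.2")
model = AutoModelForCausalLM.from_pretrained(
    "jondurbin/airoboros-l2-70b-2.2", torch_dtype=torch.float16, device_map="auto"
)  # note: the 70B model needs substantial GPU memory or offloading

prompt = "A chat.\nUSER: Why is the sky blue?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    stopping_criteria=StoppingCriteriaList([StopOnStrings(["USER:"], tokenizer)]),
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```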
### Fine tuning info
https://gist.github.com/jondurbin/eda7c4dc9e4459952b47eafb9e4056b2
Earlier checkpoints of adapter model here: https://huggingface.co/jondurbin/airoboros-l2-70b-2.2-checkpoints
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Replace any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt and return its output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding [ ] from the argument before dispatching to the tool.
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3)[1:-1], **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.2 models are built on top of llama-2/codellama.
The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI:
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
loupzeur/Reinforce-unit4 | loupzeur | 2023-09-13T20:07:11Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T19:55:21Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
winglian/phi-1_5-2x-fib | winglian | 2023-09-13T20:05:39Z | 14 | 2 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-13T19:15:22Z | # 2.6B Phi
This model was created by duplicating hidden layers from Microsoft's Phi 1.5.
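A rough, generic sketch of the layer-duplication idea (not the author's recipe; Phi 1.5 ships custom modeling code with a different module layout, so GPT-2 stands in here purely for illustration):
```python
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Illustrative stand-in model; the real recipe targeted Phi 1.5's custom architecture.
model = AutoModelForCausalLM.from_pretrained("gpt2")

def duplicate_layers(blocks):
    """Return a ModuleList in which every transformer block appears twice."""
    doubled = []
    for block in blocks:
        doubled.append(block)
        doubled.append(copy.deepcopy(block))
    return nn.ModuleList(doubled)

model.transformer.h = duplicate_layers(model.transformer.h)  # GPT-2 stores its blocks at transformer.h
model.config.n_layer = len(model.transformer.h)
```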
Join us on the OpenAccess AI Collective Discord: https://discord.gg/jb763J4Q |
CyberHarem/illya_coral_tenseioujototensaireijounomahoukakumei | CyberHarem | 2023-09-13T20:05:15Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/illya_coral_tenseioujototensaireijounomahoukakumei",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T19:49:59Z | ---
license: mit
datasets:
- CyberHarem/illya_coral_tenseioujototensaireijounomahoukakumei
pipeline_tag: text-to-image
tags:
- art
---
# Lora of illya_coral_tenseioujototensaireijounomahoukakumei
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4420, you need to download `4420/illya_coral_tenseioujototensaireijounomahoukakumei.pt` as the embedding and `4420/illya_coral_tenseioujototensaireijounomahoukakumei.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.946. The trigger words are:
1. `illya_coral_tenseioujototensaireijounomahoukakumei`
2. `red_hair, maid, blue_eyes, maid_headdress, bangs, apron, maid_apron, sidelocks, hat, closed_mouth, hair_between_eyes, long_hair`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are listed below:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.890 | [Download](5100/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.840 | [Download](4760/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.946** | [**Download**](4420/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.822 | [Download](4080/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.899 | [Download](3740/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.821 | [Download](3400/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.762 | [Download](3060/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.909 | [Download](2720/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.882 | [Download](2380/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.859 | [Download](2040/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.906 | [Download](1700/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.914 | [Download](1360/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.936 | [Download](1020/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.878 | [Download](680/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.808 | [Download](340/illya_coral_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
davera-017/Reinforce-CartPole-v2 | davera-017 | 2023-09-13T20:05:00Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T20:04:51Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
eugeniahaluszka/bert_adaptation_peppa_pig | eugeniahaluszka | 2023-09-13T20:04:52Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T20:04:32Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8623
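A hedged usage sketch, not part of the auto-generated card: the adapted model can be queried with the `transformers` fill-mask pipeline (the example sentence below is illustrative only).
```python
from transformers import pipeline

# Hedged sketch; the Spanish example sentence is illustrative only.
unmasker = pipeline("fill-mask", model="eugeniahaluszka/bert_adaptation_peppa_pig")
for prediction in unmasker("peppa y su familia viven en una [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```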
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9508 | 1.0 | 35 | 3.2785 |
| 2.9376 | 2.0 | 70 | 2.6258 |
| 2.4875 | 3.0 | 105 | 2.2809 |
| 2.2083 | 4.0 | 140 | 2.1301 |
| 2.1109 | 5.0 | 175 | 2.2711 |
| 2.0382 | 6.0 | 210 | 2.2041 |
| 1.9327 | 7.0 | 245 | 2.3881 |
| 1.8549 | 8.0 | 280 | 2.2020 |
| 1.8009 | 9.0 | 315 | 2.1904 |
| 1.8468 | 10.0 | 350 | 2.2052 |
| 1.9202 | 11.0 | 385 | 2.0516 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
sofiabobbiesi/Edgar-Allan-Poe | sofiabobbiesi | 2023-09-13T20:04:00Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T20:03:37Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Edgar-Allan-Poe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Edgar-Allan-Poe
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1953 | 1.0 | 2416 | 3.1038 |
| 2.927 | 2.0 | 4832 | 3.0185 |
| 2.7176 | 3.0 | 7248 | 2.9858 |
| 2.6847 | 4.0 | 9664 | 2.9803 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
GuntherFrager/cortazar_1 | GuntherFrager | 2023-09-13T20:03:07Z | 149 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T19:37:20Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Julio-Cortazar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Julio-Cortazar
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.625 | 1.0 | 177 | 2.5849 |
| 2.5334 | 2.0 | 354 | 2.5254 |
| 2.4463 | 3.0 | 531 | 2.4991 |
| 2.2868 | 4.0 | 708 | 2.4825 |
| 2.198 | 5.0 | 885 | 2.4753 |
| 2.1981 | 6.0 | 1062 | 2.4707 |
| 2.096 | 7.0 | 1239 | 2.4655 |
| 1.9984 | 8.0 | 1416 | 2.4642 |
| 1.9602 | 9.0 | 1593 | 2.4631 |
| 2.0752 | 10.0 | 1770 | 2.4636 |
| 2.0311 | 11.0 | 1947 | 2.4635 |
| 2.0731 | 12.0 | 2124 | 2.4635 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
cjuangab/bert_adaptation_martin_fierro | cjuangab | 2023-09-13T20:01:09Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:33:07Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_martin_fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_martin_fierro
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7508 | 1.0 | 29 | 5.2082 |
| 4.7335 | 2.0 | 58 | 4.4594 |
| 4.1562 | 3.0 | 87 | 4.2792 |
| 3.9629 | 4.0 | 116 | 3.9394 |
| 4.2598 | 5.0 | 145 | 4.3763 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
GreenBitAI/LLaMA-2-7B-4bit-groupsize32 | GreenBitAI | 2023-09-13T19:59:38Z | 78 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T19:25:08Z | ---
license: apache-2.0
---
# GreenBit LLaMA
This is GreenBitAI's pretrained **4-bit** LLaMA-2 7B model, with an advanced compression design and lossless performance relative to FP16 models.
Please refer to our [Github page](https://github.com/GreenBitAI/low_bit_llama) for the code to run the model and more information.
## Model Description
- **Developed by:** [GreenBitAI](https://github.com/GreenBitAI)
- **Model type:** Causal (Llama 2)
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), [Llama 2 license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
|
Maximilianoeze/Martin-Fierro | Maximilianoeze | 2023-09-13T19:57:35Z | 140 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T19:57:12Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Martin-Fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Martin-Fierro
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8176
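A minimal generation sketch, assuming the standard `transformers` text-generation pipeline (the prompt and sampling settings are only illustrations):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Maximilianoeze/Martin-Fierro")
# Prompt in the style of the training text; sampling settings are arbitrary
out = generator("Aquí me pongo a cantar", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```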
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.109 | 1.0 | 40 | 4.9585 |
| 4.6404 | 2.0 | 80 | 4.8648 |
| 4.3806 | 3.0 | 120 | 4.8268 |
| 4.2234 | 4.0 | 160 | 4.8176 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
irfanjafar/my-pet-dog-xzg | irfanjafar | 2023-09-13T19:56:58Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-13T19:54:14Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzg Dreambooth model trained by irfanjafar following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: MESCOE145
Sample pictures of this concept:
.jpeg)
|
Juniplayground/Mist_LLaMA-2-7B-1024_V10 | Juniplayground | 2023-09-13T19:55:58Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-13T19:55:56Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
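For reference, a minimal sketch of the equivalent `BitsAndBytesConfig` object (assuming the standard `transformers`/`bitsandbytes` integration; the values simply mirror the list above):
```python
from transformers import BitsAndBytesConfig
# 8-bit loading, matching the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```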
### Framework versions
- PEFT 0.5.0
|
Maximilianoeze/bert_adaptation_peppa_pig | Maximilianoeze | 2023-09-13T19:55:26Z | 128 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:55:09Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1178 | 1.0 | 35 | 3.5855 |
| 3.0127 | 2.0 | 70 | 2.7329 |
| 2.6519 | 3.0 | 105 | 2.7326 |
| 2.5643 | 4.0 | 140 | 2.5444 |
| 2.2931 | 5.0 | 175 | 2.3511 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
dbarsotti/bert_adaptation_martin_fierro | dbarsotti | 2023-09-13T19:55:24Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:55:06Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_martin_fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_martin_fierro
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7508 | 1.0 | 29 | 5.2082 |
| 4.7335 | 2.0 | 58 | 4.4594 |
| 4.1562 | 3.0 | 87 | 4.2792 |
| 3.9629 | 4.0 | 116 | 3.9394 |
| 4.2598 | 5.0 | 145 | 4.3763 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
steffstaurini/bert_adaptation_referencias_de_vinos | steffstaurini | 2023-09-13T19:53:20Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:53:07Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_referencias_de_vinos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_referencias_de_vinos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3006 | 1.0 | 375 | 2.6895 |
| 2.6626 | 2.0 | 750 | 2.4617 |
| 2.4254 | 3.0 | 1125 | 2.3339 |
| 2.3074 | 4.0 | 1500 | 2.3010 |
| 2.2544 | 5.0 | 1875 | 2.3292 |
| 2.2285 | 6.0 | 2250 | 2.2467 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hdrobins/bert_adaptation_peppa_pig | hdrobins | 2023-09-13T19:51:03Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:50:16Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6846
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1693 | 1.0 | 35 | 3.4060 |
| 3.2171 | 2.0 | 70 | 3.1686 |
| 2.8351 | 3.0 | 105 | 2.7688 |
| 2.5313 | 4.0 | 140 | 2.5451 |
| 2.4431 | 5.0 | 175 | 2.7364 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Chelo11/Martin-Fierro | Chelo11 | 2023-09-13T19:47:48Z | 139 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:DeepESP/gpt2-spanish",
"base_model:finetune:DeepESP/gpt2-spanish",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T19:46:49Z | ---
license: mit
base_model: DeepESP/gpt2-spanish
tags:
- generated_from_trainer
model-index:
- name: Martin-Fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Martin-Fierro
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.927 | 1.0 | 10 | 4.7835 |
| 4.5579 | 2.0 | 20 | 4.4981 |
| 4.3378 | 3.0 | 30 | 4.4358 |
| 4.2049 | 4.0 | 40 | 4.3742 |
| 4.0306 | 5.0 | 50 | 4.3384 |
| 3.9757 | 6.0 | 60 | 4.3182 |
| 3.8343 | 7.0 | 70 | 4.3031 |
| 3.7464 | 8.0 | 80 | 4.2927 |
| 3.7114 | 9.0 | 90 | 4.2880 |
| 3.707 | 10.0 | 100 | 4.2863 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jpbaudoin/bert_adaptation_peppa_pig | jpbaudoin | 2023-09-13T19:46:52Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:46:33Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2509 | 1.0 | 35 | 2.4300 |
| 2.1185 | 2.0 | 70 | 2.3098 |
| 2.0163 | 3.0 | 105 | 2.1849 |
| 1.8594 | 4.0 | 140 | 2.1367 |
| 1.9299 | 5.0 | 175 | 2.2375 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
FabioGrigorjev/bert_adaptation_referencias_de_vinos | FabioGrigorjev | 2023-09-13T19:46:23Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:46:05Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_referencias_de_vinos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_referencias_de_vinos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1798 | 1.0 | 375 | 2.5403 |
| 2.4869 | 2.0 | 750 | 2.3447 |
| 2.309 | 3.0 | 1125 | 2.2750 |
| 2.2543 | 4.0 | 1500 | 2.1865 |
| 2.2056 | 5.0 | 1875 | 2.1983 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
didicito/bert_adaptation_martin_fierro | didicito | 2023-09-13T19:45:48Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:45:35Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_martin_fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_martin_fierro
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7508 | 1.0 | 29 | 5.2082 |
| 4.7335 | 2.0 | 58 | 4.4594 |
| 4.1562 | 3.0 | 87 | 4.2792 |
| 3.9629 | 4.0 | 116 | 3.9394 |
| 4.2598 | 5.0 | 145 | 4.3763 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gongoody/bert_adaptation_referencias_de_vinos | gongoody | 2023-09-13T19:45:35Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:44:44Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_referencias_de_vinos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_referencias_de_vinos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7269 | 1.0 | 375 | 3.0497 |
| 2.9328 | 2.0 | 750 | 2.7255 |
| 2.7404 | 3.0 | 1125 | 2.7349 |
| 2.5953 | 4.0 | 1500 | 2.6279 |
| 2.5463 | 5.0 | 1875 | 2.5809 |
| 2.4262 | 6.0 | 2250 | 2.5138 |
| 2.3802 | 7.0 | 2625 | 2.4732 |
| 2.3381 | 8.0 | 3000 | 2.4881 |
| 2.2689 | 9.0 | 3375 | 2.4361 |
| 2.2619 | 10.0 | 3750 | 2.4239 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
eugene6/ppo-Huggy | eugene6 | 2023-09-13T19:44:54Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-13T19:44:49Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: eugene6/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rghosh8/alpaca7B-lora-support-gpt_ccc | rghosh8 | 2023-09-13T19:37:12Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-07-18T22:31:33Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
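A minimal sketch of loading the adapter with PEFT (assuming the base model recorded in the adapter config is available to you; 8-bit loading mirrors the config above):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
adapter_id = "rghosh8/alpaca7B-lora-support-gpt_ccc"
# Read the adapter config to find the base model it was trained on
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base, adapter_id)
```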
### Framework versions
- PEFT 0.5.0
|
Sandiago21/hubert-large-ls960-ft-finetuned-gtzan | Sandiago21 | 2023-09-13T19:36:25Z | 9 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:facebook/hubert-large-ls960-ft",
"base_model:finetune:facebook/hubert-large-ls960-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-07-08T17:02:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
base_model: facebook/hubert-large-ls960-ft
model-index:
- name: hubert-large-ls960-ft-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-ls960-ft-finetuned-gtzan
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7096
- Accuracy: 0.85
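A minimal inference sketch, assuming the standard `transformers` audio-classification pipeline (`"song.wav"` is a placeholder path; GTZAN clips are 30-second music excerpts):
```python
from transformers import pipeline
classifier = pipeline("audio-classification", model="Sandiago21/hubert-large-ls960-ft-finetuned-gtzan")
# Returns the most likely music genres with scores for the given clip
print(classifier("song.wav", top_k=3))
```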
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2623 | 1.0 | 56 | 2.2399 | 0.21 |
| 1.881 | 1.99 | 112 | 1.7105 | 0.41 |
| 1.5793 | 2.99 | 168 | 1.6203 | 0.46 |
| 1.3018 | 4.0 | 225 | 1.3824 | 0.52 |
| 1.0219 | 5.0 | 281 | 0.9899 | 0.66 |
| 0.9047 | 5.99 | 337 | 0.8812 | 0.74 |
| 0.8353 | 6.99 | 393 | 0.7629 | 0.78 |
| 0.659 | 8.0 | 450 | 0.9674 | 0.71 |
| 0.645 | 9.0 | 506 | 0.8953 | 0.74 |
| 0.6233 | 9.99 | 562 | 0.6638 | 0.8 |
| 0.4155 | 10.99 | 618 | 0.6323 | 0.81 |
| 0.2689 | 12.0 | 675 | 0.5423 | 0.83 |
| 0.3714 | 13.0 | 731 | 0.6770 | 0.83 |
| 0.0692 | 13.99 | 787 | 0.6260 | 0.83 |
| 0.0778 | 14.99 | 843 | 0.5801 | 0.85 |
| 0.187 | 16.0 | 900 | 0.6722 | 0.83 |
| 0.1469 | 17.0 | 956 | 0.7473 | 0.85 |
| 0.1052 | 17.92 | 1008 | 0.7096 | 0.85 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sofiabobbiesi/bert_adaptation_referencias_de_vinos | sofiabobbiesi | 2023-09-13T19:35:19Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:35:00Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_referencias_de_vinos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_referencias_de_vinos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9878 | 1.0 | 625 | 2.4628 |
| 2.4187 | 2.0 | 1250 | 2.2630 |
| 2.2583 | 3.0 | 1875 | 2.1819 |
| 2.1432 | 4.0 | 2500 | 2.0677 |
| 2.0864 | 5.0 | 3125 | 2.0408 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Amirhnrn/poca-SoccerTwos | Amirhnrn | 2023-09-13T19:35:12Z | 30 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-13T19:14:24Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Amirhnrn/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Chelo11/bert_adaptation_peppa_pig | Chelo11 | 2023-09-13T19:25:41Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:24:53Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.758 | 1.0 | 70 | 3.1082 |
| 2.763 | 2.0 | 140 | 2.5174 |
| 2.3389 | 3.0 | 210 | 2.3997 |
| 2.1706 | 4.0 | 280 | 2.1940 |
| 2.0781 | 5.0 | 350 | 2.4517 |
| 2.0058 | 6.0 | 420 | 2.1393 |
| 1.8748 | 7.0 | 490 | 2.2303 |
| 1.8267 | 8.0 | 560 | 2.2157 |
| 1.7874 | 9.0 | 630 | 2.0251 |
| 1.7614 | 10.0 | 700 | 2.1635 |
| 1.7023 | 11.0 | 770 | 1.9857 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
YhSharaf/output_models | YhSharaf | 2023-09-13T19:23:13Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-13T00:40:24Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - YhSharaf/output_models
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks girl using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
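A minimal inference sketch with `diffusers` (assuming a CUDA GPU; the prompt extends the instance prompt above and is only an illustration):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("YhSharaf/output_models", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# "sks girl" is the instance token this DreamBooth run was trained on
image = pipe("a photo of sks girl in a garden, detailed, natural light").images[0]
image.save("sks_girl.png")
```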
|
Naevier/bert_adaptation_referencias_de_vinos | Naevier | 2023-09-13T19:17:35Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T19:17:10Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_referencias_de_vinos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_referencias_de_vinos
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1637 | 1.0 | 375 | 2.5674 |
| 2.5516 | 2.0 | 750 | 2.4030 |
| 2.3442 | 3.0 | 1125 | 2.2517 |
| 2.247 | 4.0 | 1500 | 2.2516 |
| 2.1986 | 5.0 | 1875 | 2.1737 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/euphyllia_magenta_tenseioujototensaireijounomahoukakumei | CyberHarem | 2023-09-13T19:14:06Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/euphyllia_magenta_tenseioujototensaireijounomahoukakumei",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T18:48:22Z | ---
license: mit
datasets:
- CyberHarem/euphyllia_magenta_tenseioujototensaireijounomahoukakumei
pipeline_tag: text-to-image
tags:
- art
---
# Lora of euphyllia_magenta_tenseioujototensaireijounomahoukakumei
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8960, you need to download `8960/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.pt` as the embedding and `8960/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8960**, with the score of 0.853. The trigger words are:
1. `euphyllia_magenta_tenseioujototensaireijounomahoukakumei`
2. `purple_eyes, long_hair, bangs, grey_hair, braid, bow, blurry, hair_bow, hair_ornament`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9600 | 0.849 | [Download](9600/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9600/previews/nude.png) | [<NSFW, click to see>](9600/previews/nude2.png) |  |  |
| **8960** | **0.853** | [**Download**](8960/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8960/previews/nude.png) | [<NSFW, click to see>](8960/previews/nude2.png) |  |  |
| 8320 | 0.843 | [Download](8320/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8320/previews/nude.png) | [<NSFW, click to see>](8320/previews/nude2.png) |  |  |
| 7680 | 0.802 | [Download](7680/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7680/previews/nude.png) | [<NSFW, click to see>](7680/previews/nude2.png) |  |  |
| 7040 | 0.841 | [Download](7040/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7040/previews/nude.png) | [<NSFW, click to see>](7040/previews/nude2.png) |  |  |
| 6400 | 0.808 | [Download](6400/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6400/previews/nude.png) | [<NSFW, click to see>](6400/previews/nude2.png) |  |  |
| 5760 | 0.824 | [Download](5760/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5120 | 0.808 | [Download](5120/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5120/previews/nude.png) | [<NSFW, click to see>](5120/previews/nude2.png) |  |  |
| 4480 | 0.803 | [Download](4480/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3840 | 0.777 | [Download](3840/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3200 | 0.834 | [Download](3200/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2560 | 0.777 | [Download](2560/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2560/previews/nude.png) | [<NSFW, click to see>](2560/previews/nude2.png) |  |  |
| 1920 | 0.775 | [Download](1920/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1280 | 0.737 | [Download](1280/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1280/previews/nude.png) | [<NSFW, click to see>](1280/previews/nude2.png) |  |  |
| 640 | 0.746 | [Download](640/euphyllia_magenta_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](640/previews/nude.png) | [<NSFW, click to see>](640/previews/nude2.png) |  |  |
|
markjosims/wav2vec2-large-mms-1b-tira-lid | markjosims | 2023-09-13T19:10:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-08-28T20:14:57Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-mms-1b-tira-lid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-tira-lid
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0026
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3168 | 0.42 | 100 | 0.2023 | 0.9167 |
| 0.3278 | 0.84 | 200 | 0.1465 | 0.9667 |
| 0.2725 | 1.26 | 300 | 0.6432 | 0.8 |
| 0.1371 | 1.67 | 400 | 0.0144 | 1.0 |
| 0.094 | 2.09 | 500 | 0.0015 | 1.0 |
| 0.0654 | 2.51 | 600 | 0.0978 | 0.9667 |
| 0.1813 | 2.93 | 700 | 0.1174 | 0.9833 |
| 0.032 | 3.35 | 800 | 0.0019 | 1.0 |
| 0.0422 | 3.77 | 900 | 0.0026 | 1.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
davera-017/ppo-LunarLander-v2 | davera-017 | 2023-09-13T19:00:42Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2023-08-08T17:07:28Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -89.76 +/- 56.00
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'davera-017/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
badhorse666/ppo-Pyramids | badhorse666 | 2023-09-13T18:56:02Z | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-13T18:55:27Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: badhorse666/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
artyomboyko/ppo-LunarLander-v2 | artyomboyko | 2023-09-13T18:55:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T18:55:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.27 +/- 22.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# The filename below is an assumption; verify it against the files in this repo
checkpoint = load_from_hub(repo_id="artyomboyko/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
1juanromero/distilbert-base-uncased-finetuned-cola | 1juanromero | 2023-09-13T18:49:55Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-11T04:29:00Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5361146089547957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8314
- Matthews Correlation: 0.5361
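A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (CoLA is a grammatical-acceptability task; the example sentence is only an illustration, and the label names depend on how the classification head was configured):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="1juanromero/distilbert-base-uncased-finetuned-cola")
# Scores whether the sentence is linguistically acceptable
print(classifier("The book was written by the author last year."))
```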
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5225 | 1.0 | 535 | 0.4681 | 0.4755 |
| 0.3484 | 2.0 | 1070 | 0.5702 | 0.4636 |
| 0.2367 | 3.0 | 1605 | 0.6448 | 0.5294 |
| 0.1693 | 4.0 | 2140 | 0.7307 | 0.5344 |
| 0.1216 | 5.0 | 2675 | 0.8314 | 0.5361 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
fetiska/downtown-cab-co | fetiska | 2023-09-13T18:44:48Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T18:44:46Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: downtown-cab-co
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="fetiska/downtown-cab-co", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fetiska/elsa | fetiska | 2023-09-13T18:33:21Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T18:33:19Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: elsa
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="fetiska/elsa", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tayar3/ckpt-1-0 | tayar3 | 2023-09-13T18:32:22Z | 31 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-13T18:28:37Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ckpt-1.0 Dreambooth model trained by tayar3 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
muhtasham/bert-tiny-finetuned-cuad | muhtasham | 2023-09-13T18:25:17Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:cuad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-08-01T20:34:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cuad
model-index:
- name: tiny-bert-finetuned-cuad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-finetuned-cuad
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on a portion of the cuad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4606
# Note
The model was not trained on the whole dataset, but only on the first 10% of `train` + the first 10% of `test`.
```python
from datasets import load_dataset, DatasetDict
raw_datasets_train, raw_datasets_test = load_dataset("cuad", split=['train[:10%]', 'test[:10%]'])
datasets = DatasetDict({'train': raw_datasets_train, 'validation': raw_datasets_test})
```
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 136 | 2.9644 |
| No log | 2.0 | 272 | 1.9337 |
| No log | 3.0 | 408 | 1.4375 |
| 2.7124 | 4.0 | 544 | 1.0978 |
| 2.7124 | 5.0 | 680 | 0.8571 |
| 2.7124 | 6.0 | 816 | 0.6907 |
| 2.7124 | 7.0 | 952 | 0.5799 |
| 0.9512 | 8.0 | 1088 | 0.5105 |
| 0.9512 | 9.0 | 1224 | 0.4726 |
| 0.9512 | 10.0 | 1360 | 0.4606 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-tiny-finetuned-legal-contracts-longer | muhtasham | 2023-09-13T18:24:52Z | 265 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"dataset:albertvillanova/legal_contracts",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-03T23:19:41Z | ---
datasets:
- albertvillanova/legal_contracts
base_model: google/bert_uncased_L-2_H-128_A-2
---
# bert-tiny-finetuned-legal-contracts-longer
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on a portion of the legal_contracts dataset, trained for more epochs.
# Note
The model was not trained on the whole dataset, which is around 9.5 GB, but only on the first 10% of `train` + the last 10% of `train`.
```python
from datasets import load_dataset
datasets_train = load_dataset('albertvillanova/legal_contracts', split='train[:10%]')
datasets_validation = load_dataset('albertvillanova/legal_contracts', split='train[-10%:]')
```
|
muhtasham/base-mlm-imdb | muhtasham | 2023-09-13T18:24:21Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google/bert_uncased_L-12_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-12_H-768_A-12",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-12-11T00:55:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/bert_uncased_L-12_H-768_A-12
model-index:
- name: base-mlm-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-mlm-imdb
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1149 | 0.16 | 500 | 1.9627 |
| 2.0674 | 0.32 | 1000 | 1.9620 |
| 2.0582 | 0.48 | 1500 | 1.9502 |
| 2.0542 | 0.64 | 2000 | 1.9418 |
| 2.0398 | 0.8 | 2500 | 1.9223 |
| 2.037 | 0.96 | 3000 | 1.9220 |
| 1.9831 | 1.12 | 3500 | 1.9247 |
| 1.972 | 1.28 | 4000 | 1.9123 |
| 1.9708 | 1.44 | 4500 | 1.9122 |
| 1.967 | 1.6 | 5000 | 1.9097 |
| 1.9582 | 1.76 | 5500 | 1.9085 |
| 1.9715 | 1.92 | 6000 | 1.9099 |
| 1.9459 | 2.08 | 6500 | 1.9113 |
| 1.9384 | 2.24 | 7000 | 1.9103 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-mnli | muhtasham | 2023-09-13T18:23:43Z | 109 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-05T20:26:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/bert_uncased_L-2_H-128_A-2
model-index:
- name: tiny-mlm-glue-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-mnli
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4196 | 0.4 | 500 | 3.9829 |
| 4.3712 | 0.8 | 1000 | 4.0000 |
| 4.3439 | 1.2 | 1500 | 3.9642 |
| 4.2725 | 1.6 | 2000 | 3.9736 |
| 4.2908 | 2.0 | 2500 | 3.9309 |
| 4.1935 | 2.4 | 3000 | 3.9395 |
| 4.1935 | 2.8 | 3500 | 3.9470 |
| 4.1731 | 3.2 | 4000 | 3.9722 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-qqp | muhtasham | 2023-09-13T18:23:31Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-05T21:00:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/bert_uncased_L-2_H-128_A-2
model-index:
- name: tiny-mlm-glue-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qqp
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.313 | 0.4 | 500 | 3.8831 |
| 4.1839 | 0.8 | 1000 | 3.7763 |
| 4.0803 | 1.2 | 1500 | 3.6769 |
| 3.9319 | 1.6 | 2000 | 3.6800 |
| 3.8968 | 2.0 | 2500 | 3.6765 |
| 3.8441 | 2.4 | 3000 | 3.5576 |
| 3.7809 | 2.8 | 3500 | 3.6472 |
| 3.8456 | 3.2 | 4000 | 3.5526 |
| 3.682 | 3.6 | 4500 | 3.5926 |
| 3.749 | 4.0 | 5000 | 3.5727 |
| 3.6818 | 4.4 | 5500 | 3.5533 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
muhtasham/tiny-mlm-glue-rte | muhtasham | 2023-09-13T18:23:23Z | 118 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-05T21:12:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/bert_uncased_L-2_H-128_A-2
model-index:
- name: tiny-mlm-glue-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-rte
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2975 | 1.6 | 500 | 3.7350 |
| 4.2242 | 3.21 | 1000 | 3.9463 |
| 4.1731 | 4.81 | 1500 | 3.7550 |
| 4.088 | 6.41 | 2000 | 3.7817 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
muhtasham/bert-small-finetuned-legal-contracts10train10val | muhtasham | 2023-09-13T18:23:13Z | 171 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"dataset:albertvillanova/legal_contracts",
"base_model:google/bert_uncased_L-4_H-512_A-8",
"base_model:finetune:google/bert_uncased_L-4_H-512_A-8",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-14T08:54:53Z | ---
datasets:
- albertvillanova/legal_contracts
base_model: google/bert_uncased_L-4_H-512_A-8
---
# bert-small-finetuned-legal-contracts10train10val
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on a portion of the legal_contracts dataset.
# Note
The model was not trained on the whole dataset, which is around 9.5 GB, but only on
## The first 10% of `train` + the last 10% of `train`.
```python
from datasets import load_dataset

datasets_train = load_dataset('albertvillanova/legal_contracts', split='train[:10%]')
datasets_validation = load_dataset('albertvillanova/legal_contracts', split='train[-10%:]')
```
|
muhtasham/tiny-vanilla-target-tweet | muhtasham | 2023-09-13T18:22:30Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-12-11T06:30:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
base_model: google/bert_uncased_L-2_H-128_A-2
model-index:
- name: tiny-vanilla-target-tweet
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: train
args: emotion
metrics:
- type: accuracy
value: 0.7032085561497327
name: Accuracy
- type: f1
value: 0.704229444708009
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-vanilla-target-tweet
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9887
- Accuracy: 0.7032
- F1: 0.7042
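A short classification sketch (note that the checkpoint's config may expose generic `LABEL_n` names rather than the tweet_eval emotion label strings; the example tweet is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="muhtasham/tiny-vanilla-target-tweet")
print(classifier("I can't believe how great this day turned out!"))
```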
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1604 | 4.9 | 500 | 0.9784 | 0.6604 | 0.6290 |
| 0.7656 | 9.8 | 1000 | 0.8273 | 0.7139 | 0.6905 |
| 0.534 | 14.71 | 1500 | 0.8138 | 0.7219 | 0.7143 |
| 0.3832 | 19.61 | 2000 | 0.8591 | 0.7086 | 0.7050 |
| 0.2722 | 24.51 | 2500 | 0.9250 | 0.7112 | 0.7118 |
| 0.1858 | 29.41 | 3000 | 0.9887 | 0.7032 | 0.7042 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
muhtasham/bert-small-finetuned-eurlex | muhtasham | 2023-09-13T18:21:33Z | 182 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google/bert_uncased_L-4_H-512_A-8",
"base_model:finetune:google/bert_uncased_L-4_H-512_A-8",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-14T21:46:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/bert_uncased_L-4_H-512_A-8
model-index:
- name: bert_uncased_L-4_H-512_A-8-finetuned-eurlex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-4_H-512_A-8-finetuned-eurlex
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9798
- eval_runtime: 51.2571
- eval_samples_per_second: 638.916
- eval_steps_per_second: 79.872
- epoch: 2.0
- step: 62604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finetuned-legal-definitions | muhtasham | 2023-09-13T18:21:24Z | 180 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:finiteautomata/legal-definitions",
"base_model:google/bert_uncased_L-4_H-512_A-8",
"base_model:finetune:google/bert_uncased_L-4_H-512_A-8",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-14T00:38:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- finiteautomata/legal-definitions
base_model: google/bert_uncased_L-4_H-512_A-8
model-index:
- name: bert_uncased_L-4_H-512_A-8-finetuned-legal-definitions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-4_H-512_A-8-finetuned-legal-definitions
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the legal-definitions dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7317 | 1.0 | 6401 | 1.6642 |
| 1.5578 | 2.0 | 12802 | 1.5784 |
| 1.5051 | 3.0 | 19203 | 1.4970 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finetuned-eurlex-longer | muhtasham | 2023-09-13T18:21:11Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google/bert_uncased_L-4_H-512_A-8",
"base_model:finetune:google/bert_uncased_L-4_H-512_A-8",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-14T23:24:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/bert_uncased_L-4_H-512_A-8
model-index:
- name: bert_uncased_L-4_H-512_A-8-finetuned-eurlex-longer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-4_H-512_A-8-finetuned-eurlex-longer
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0308 | 1.0 | 3913 | 0.9303 |
| 1.0053 | 2.0 | 7826 | 0.9066 |
| 0.9799 | 3.0 | 11739 | 0.8795 |
| 0.9575 | 4.0 | 15652 | 0.8677 |
| 0.9413 | 5.0 | 19565 | 0.8555 |
| 0.9319 | 6.0 | 23478 | 0.8447 |
| 0.9235 | 7.0 | 27391 | 0.8388 |
| 0.9132 | 8.0 | 31304 | 0.8345 |
| 0.9105 | 9.0 | 35217 | 0.8312 |
| 0.9154 | 10.0 | 39130 | 0.8229 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
loupzeur/ppo-LunarLander-v2 | loupzeur | 2023-09-13T18:16:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T17:54:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.11 +/- 14.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
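A minimal loading sketch; the checkpoint filename inside the repo is assumed to follow the usual huggingface_sb3 naming, so check the repo's file list before running:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; adjust to the actual .zip in the repo.
checkpoint = load_from_hub(repo_id="loupzeur/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```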
|
faisito/xlm-roberta-base-finetuned-panx-fr | faisito | 2023-09-13T18:14:15Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-19T06:39:44Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- type: f1
value: 0.8357154868000672
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2854
- F1: 0.8357
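A quick NER sketch (`aggregation_strategy="simple"` merges word pieces into whole entities; the French sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="faisito/xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")
print(ner("Emmanuel Macron a prononcé un discours à Paris devant l'UNESCO."))
```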
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5807 | 1.0 | 191 | 0.3231 | 0.7878 |
| 0.2567 | 2.0 | 382 | 0.2997 | 0.8154 |
| 0.1709 | 3.0 | 573 | 0.2854 | 0.8357 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
CyberHarem/natsume_pokemon | CyberHarem | 2023-09-13T18:10:22Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/natsume_pokemon",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T17:56:02Z | ---
license: mit
datasets:
- CyberHarem/natsume_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of natsume_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5500, you need to download `5500/natsume_pokemon.pt` as the embedding and `5500/natsume_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5500**, with a score of 0.442. The trigger words are:
1. `natsume_pokemon`
2. `long_hair, bangs, red_eyes, breasts, blunt_bangs, black_hair, bodysuit, blush, large_breasts`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.267 | [Download](7500/natsume_pokemon.zip) | [<NSFW, click to see>](7500/previews/pattern_1.png) | [<NSFW, click to see>](7500/previews/pattern_2.png) | [<NSFW, click to see>](7500/previews/pattern_3.png) | [<NSFW, click to see>](7500/previews/pattern_4.png) | [<NSFW, click to see>](7500/previews/pattern_5.png) | [<NSFW, click to see>](7500/previews/pattern_6.png) |  | [<NSFW, click to see>](7500/previews/bondage.png) | [<NSFW, click to see>](7500/previews/free.png) |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.224 | [Download](7000/natsume_pokemon.zip) | [<NSFW, click to see>](7000/previews/pattern_1.png) | [<NSFW, click to see>](7000/previews/pattern_2.png) | [<NSFW, click to see>](7000/previews/pattern_3.png) | [<NSFW, click to see>](7000/previews/pattern_4.png) | [<NSFW, click to see>](7000/previews/pattern_5.png) | [<NSFW, click to see>](7000/previews/pattern_6.png) |  | [<NSFW, click to see>](7000/previews/bondage.png) | [<NSFW, click to see>](7000/previews/free.png) |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.309 | [Download](6500/natsume_pokemon.zip) | [<NSFW, click to see>](6500/previews/pattern_1.png) | [<NSFW, click to see>](6500/previews/pattern_2.png) | [<NSFW, click to see>](6500/previews/pattern_3.png) | [<NSFW, click to see>](6500/previews/pattern_4.png) | [<NSFW, click to see>](6500/previews/pattern_5.png) | [<NSFW, click to see>](6500/previews/pattern_6.png) |  | [<NSFW, click to see>](6500/previews/bondage.png) | [<NSFW, click to see>](6500/previews/free.png) |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.263 | [Download](6000/natsume_pokemon.zip) | [<NSFW, click to see>](6000/previews/pattern_1.png) | [<NSFW, click to see>](6000/previews/pattern_2.png) | [<NSFW, click to see>](6000/previews/pattern_3.png) | [<NSFW, click to see>](6000/previews/pattern_4.png) | [<NSFW, click to see>](6000/previews/pattern_5.png) | [<NSFW, click to see>](6000/previews/pattern_6.png) |  | [<NSFW, click to see>](6000/previews/bondage.png) | [<NSFW, click to see>](6000/previews/free.png) |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| **5500** | **0.442** | [**Download**](5500/natsume_pokemon.zip) | [<NSFW, click to see>](5500/previews/pattern_1.png) | [<NSFW, click to see>](5500/previews/pattern_2.png) | [<NSFW, click to see>](5500/previews/pattern_3.png) | [<NSFW, click to see>](5500/previews/pattern_4.png) | [<NSFW, click to see>](5500/previews/pattern_5.png) | [<NSFW, click to see>](5500/previews/pattern_6.png) |  | [<NSFW, click to see>](5500/previews/bondage.png) | [<NSFW, click to see>](5500/previews/free.png) |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.379 | [Download](5000/natsume_pokemon.zip) | [<NSFW, click to see>](5000/previews/pattern_1.png) | [<NSFW, click to see>](5000/previews/pattern_2.png) | [<NSFW, click to see>](5000/previews/pattern_3.png) | [<NSFW, click to see>](5000/previews/pattern_4.png) | [<NSFW, click to see>](5000/previews/pattern_5.png) | [<NSFW, click to see>](5000/previews/pattern_6.png) |  | [<NSFW, click to see>](5000/previews/bondage.png) | [<NSFW, click to see>](5000/previews/free.png) |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.442 | [Download](4500/natsume_pokemon.zip) | [<NSFW, click to see>](4500/previews/pattern_1.png) | [<NSFW, click to see>](4500/previews/pattern_2.png) | [<NSFW, click to see>](4500/previews/pattern_3.png) | [<NSFW, click to see>](4500/previews/pattern_4.png) | [<NSFW, click to see>](4500/previews/pattern_5.png) | [<NSFW, click to see>](4500/previews/pattern_6.png) |  | [<NSFW, click to see>](4500/previews/bondage.png) | [<NSFW, click to see>](4500/previews/free.png) |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.347 | [Download](4000/natsume_pokemon.zip) | [<NSFW, click to see>](4000/previews/pattern_1.png) | [<NSFW, click to see>](4000/previews/pattern_2.png) | [<NSFW, click to see>](4000/previews/pattern_3.png) | [<NSFW, click to see>](4000/previews/pattern_4.png) | [<NSFW, click to see>](4000/previews/pattern_5.png) | [<NSFW, click to see>](4000/previews/pattern_6.png) |  | [<NSFW, click to see>](4000/previews/bondage.png) | [<NSFW, click to see>](4000/previews/free.png) |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.287 | [Download](3500/natsume_pokemon.zip) | [<NSFW, click to see>](3500/previews/pattern_1.png) | [<NSFW, click to see>](3500/previews/pattern_2.png) | [<NSFW, click to see>](3500/previews/pattern_3.png) | [<NSFW, click to see>](3500/previews/pattern_4.png) | [<NSFW, click to see>](3500/previews/pattern_5.png) | [<NSFW, click to see>](3500/previews/pattern_6.png) |  | [<NSFW, click to see>](3500/previews/bondage.png) | [<NSFW, click to see>](3500/previews/free.png) |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.381 | [Download](3000/natsume_pokemon.zip) | [<NSFW, click to see>](3000/previews/pattern_1.png) | [<NSFW, click to see>](3000/previews/pattern_2.png) | [<NSFW, click to see>](3000/previews/pattern_3.png) | [<NSFW, click to see>](3000/previews/pattern_4.png) | [<NSFW, click to see>](3000/previews/pattern_5.png) | [<NSFW, click to see>](3000/previews/pattern_6.png) |  | [<NSFW, click to see>](3000/previews/bondage.png) | [<NSFW, click to see>](3000/previews/free.png) |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.301 | [Download](2500/natsume_pokemon.zip) | [<NSFW, click to see>](2500/previews/pattern_1.png) | [<NSFW, click to see>](2500/previews/pattern_2.png) | [<NSFW, click to see>](2500/previews/pattern_3.png) | [<NSFW, click to see>](2500/previews/pattern_4.png) | [<NSFW, click to see>](2500/previews/pattern_5.png) | [<NSFW, click to see>](2500/previews/pattern_6.png) |  | [<NSFW, click to see>](2500/previews/bondage.png) | [<NSFW, click to see>](2500/previews/free.png) |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.219 | [Download](2000/natsume_pokemon.zip) | [<NSFW, click to see>](2000/previews/pattern_1.png) | [<NSFW, click to see>](2000/previews/pattern_2.png) | [<NSFW, click to see>](2000/previews/pattern_3.png) | [<NSFW, click to see>](2000/previews/pattern_4.png) | [<NSFW, click to see>](2000/previews/pattern_5.png) | [<NSFW, click to see>](2000/previews/pattern_6.png) |  | [<NSFW, click to see>](2000/previews/bondage.png) | [<NSFW, click to see>](2000/previews/free.png) |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.228 | [Download](1500/natsume_pokemon.zip) | [<NSFW, click to see>](1500/previews/pattern_1.png) | [<NSFW, click to see>](1500/previews/pattern_2.png) | [<NSFW, click to see>](1500/previews/pattern_3.png) | [<NSFW, click to see>](1500/previews/pattern_4.png) | [<NSFW, click to see>](1500/previews/pattern_5.png) | [<NSFW, click to see>](1500/previews/pattern_6.png) |  | [<NSFW, click to see>](1500/previews/bondage.png) | [<NSFW, click to see>](1500/previews/free.png) |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.346 | [Download](1000/natsume_pokemon.zip) | [<NSFW, click to see>](1000/previews/pattern_1.png) | [<NSFW, click to see>](1000/previews/pattern_2.png) | [<NSFW, click to see>](1000/previews/pattern_3.png) | [<NSFW, click to see>](1000/previews/pattern_4.png) | [<NSFW, click to see>](1000/previews/pattern_5.png) | [<NSFW, click to see>](1000/previews/pattern_6.png) |  | [<NSFW, click to see>](1000/previews/bondage.png) | [<NSFW, click to see>](1000/previews/free.png) |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.167 | [Download](500/natsume_pokemon.zip) | [<NSFW, click to see>](500/previews/pattern_1.png) | [<NSFW, click to see>](500/previews/pattern_2.png) | [<NSFW, click to see>](500/previews/pattern_3.png) | [<NSFW, click to see>](500/previews/pattern_4.png) | [<NSFW, click to see>](500/previews/pattern_5.png) | [<NSFW, click to see>](500/previews/pattern_6.png) |  | [<NSFW, click to see>](500/previews/bondage.png) | [<NSFW, click to see>](500/previews/free.png) |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
plncmm/bert-clinical-scratch-wl-es | plncmm | 2023-09-13T17:59:37Z | 157 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-06-14T14:45:39Z | ---
tags:
- generated_from_trainer
base_model: dccuchile/bert-base-spanish-wwm-uncased
model-index:
- name: bert-clinical-scratch-wl-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-clinical-scratch-wl-es
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Addedk/mbert-swedish-distilled-cased | Addedk | 2023-09-13T17:57:54Z | 125 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"sv",
"dataset:KBLab/sucx3_ner",
"arxiv:2103.06418",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-06-15T12:22:20Z | ---
language:
- multilingual
- sv
license: apache-2.0
datasets: KBLab/sucx3_ner
---
# mBERT swedish distilled base model (cased)
This model is a distilled version of [mBERT](https://huggingface.co/bert-base-multilingual-cased). It was distilled using Swedish data, the 2010-2015 portion of the [Swedish Culturomics Gigaword Corpus](https://spraakbanken.gu.se/en/resources/gigaword). The code for the distillation process can be found [here](https://github.com/AddedK/swedish-mbert-distillation/blob/main/azureML/pretrain_distillation.py). This was done as part of my Master's Thesis: [*Task-agnostic knowledge distillation of mBERT to Swedish*](https://kth.diva-portal.org/smash/record.jsf?aq2=%5B%5B%5D%5D&c=2&af=%5B%5D&searchType=UNDERGRADUATE&sortOrder2=title_sort_asc&language=en&pid=diva2%3A1698451&aq=%5B%5B%7B%22freeText%22%3A%22added+kina%22%7D%5D%5D&sf=all&aqe=%5B%5D&sortOrder=author_sort_asc&onlyFullText=false&noOfRows=50&dswid=-6142).
## Model description
This is a 6-layer version of mBERT, having been distilled using the [LightMBERT](https://arxiv.org/abs/2103.06418) distillation method, but without freezing the embedding layer.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task.
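For example, a quick fill-mask sketch (the Swedish sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Addedk/mbert-swedish-distilled-cased")
for pred in fill_mask("Stockholm är Sveriges [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```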
## Training data
The data used for distillation was the 2010-2015 portion of the [Swedish Culturomics Gigaword Corpus](https://spraakbanken.gu.se/en/resources/gigaword).
The tokenized data had a file size of approximately 9 GB.
## Evaluation results
When evaluated on the [SUCX 3.0 ](https://huggingface.co/datasets/KBLab/sucx3_ner) dataset, it achieved an average F1 score of 0.859 which is competitive with the score mBERT obtained, 0.866.
When evaluated on the [English WikiANN](https://huggingface.co/datasets/wikiann) dataset, it achieved an average F1 score of 0.826 which is competitive with the score mBERT obtained, 0.849.
Additional results and comparisons are presented in my Master's Thesis
|
AOLCDROM/Tortoise-TTS-de | AOLCDROM | 2023-09-13T17:51:31Z | 0 | 6 | null | [
"region:us"
]
| null | 2023-08-31T15:42:37Z | Tortoise TTS AR model fine-tuned for German
Trained on 3 speakers; 2 LibriVox readers, and Thorsten Mueller's dataset https://github.com/thorstenMueller/Thorsten-Voice
***THE NEWEST VERSIONS***: v# indicates the number of training sessions, #e is how many epochs.
9/5 training session uploaded
Requires the tokenizer file placed in the tokenizers/ directory
Voice latents are pre-computed in voices/ for some uploaded versions. Voice samples to recompute latents are uploaded.
For use in MRQ Voice Cloning WebUI:
Requires the tokenizer used in training, and code changes to disable text cleaners. At minimum, change english_cleaners to basic_cleaners.
Code changes:
modules\tortoise-tts\tortoise\utils\tokenizer.py
Line 201: replace txt = english_cleaners(txt) with txt = basic_cleaners(txt)
modules\tortoise-tts\build\lib\tortoise\utils\tokenizer.py
Line 201: replace txt = english_cleaners(txt) with txt = basic_cleaners(txt)
\modules\dlas\dlas\data\audio\paired_voice_audio_dataset.py
Line 133: replace return text_to_sequence(txt, ['english_cleaners']) with return text_to_sequence(txt, ['basic_cleaners'])
modules\dlas\dlas\data\audio\voice_tokenizer.py
Line 14: change from dlas.models.audio.tts.tacotron2.text.cleaners import english_cleaners to from dlas.models.audio.tts.tacotron2.text.cleaners import english_cleaners, basic_cleaners
Line 85: replace txt = english_cleaners(txt) with txt = basic_cleaners(txt)
Line 134: replace word = english_cleaners(word) with word = basic_cleaners(word)
Copy and paste German text into the tokenizer tester on the utilities tab, and you should see it tokenized with all of the special characters, and no [UNK].
|
Undi95/ReMM-v1-LRPSGPT-2Char-13B | Undi95 | 2023-09-13T17:50:10Z | 16 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-13T15:26:39Z | ---
license: cc-by-nc-4.0
---
This model is based on ReMM v1 merged with the ShareGPT-13b-qloras for "2 characters".
<!-- description start -->
## Description
This repo contains fp16 files of ReMM-v1-LRPSGPT-2Char-13B, and is made to be used with a character card containing "TWO PERSONAS".
<!-- description end -->
<!-- description start -->
## LoRA used
https://huggingface.co/royallab/LimaRP-ShareGPT-13b-qloras/tree/main/prompt-a/twochar
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom
```
Enter roleplay mode. You are currently %{having a conversation|in conversation|in a roleplay chat} with <SECOND>, whose %{traits are|persona is|characteristics are}:
<SECOND PERSONA>
%{You are|Play the role of|Take the role of} <FIRST> with the following %{persona|definitions|character sheet|traits}:
<FIRST PERSONA>
%{In addition|Additionally|Also}, %{keep the following scenario in mind|remember this scenario|pay attention to this scenario}:
<SCENARIO>
```
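For illustration, one possible resolved instantiation of the template above (each `%{a|b|c}` group picks a single alternative; the character names and persona text below are hypothetical placeholders):
```
Enter roleplay mode. You are currently having a conversation with Charlotte, whose traits are:
<Charlotte's persona description>
Play the role of Victor with the following persona:
<Victor's persona description>
In addition, keep the following scenario in mind:
<scenario description>
```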
Alternatively, try using Chat mode without an instruction template.
More info: https://huggingface.co/royallab/LimaRP-ShareGPT-13b-qloras/blob/main/prompt-a/README.md
Special thanks to Sushi ♥ |
CHIH-HUNG/llama-2-13b-FINETUNE1_17w-gate_up_down_proj | CHIH-HUNG | 2023-09-13T17:40:58Z | 1,548 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE1",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-03T02:15:43Z | ---
license: llama2
datasets:
- huangyt/FINETUNE1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned from llama-2-13b on the huangyt/FINETUNE1 dataset, about 170k training examples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE1 (about 170k training examples in total)
- **peft_type:** LoRA
- **lora_rank:** 8
- **lora_target:** gate_proj, up_proj, down_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.66
- **train_runtime:** 16:24:31 (with deepspeed)
# Evaluation
- Evaluation results are from **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |
# How to convert the dataset to JSON
- Pass the dataset name to **load_dataset**, and use **take** to choose how many leading examples to keep
- Check the dataset's column names and fill them into the **example** fields (e.g., system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Load the dataset; take can fetch the first n examples of the dataset
dataset = load_dataset("huangyt/FINETUNE1", split="train", streaming=True)

# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Specify the JSON file name
json_filename = "huangyt_FINETUNE1.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
``` |
CyberHarem/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei | CyberHarem | 2023-09-13T17:40:38Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T17:18:19Z | ---
license: mit
datasets:
- CyberHarem/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei
pipeline_tag: text-to-image
tags:
- art
---
# Lora of anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8680, you need to download `8680/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.pt` as the embedding and `8680/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8680**, with a score of 0.882. The trigger words are:
1. `anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei`
2. `blonde_hair, green_eyes, bangs, short_hair, ribbon, hair_ribbon, ahoge, hair_between_eyes, smile, black_ribbon, blurry`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9300 | 0.872 | [Download](9300/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9300/previews/nude.png) | [<NSFW, click to see>](9300/previews/nude2.png) |  |  |
| **8680** | **0.882** | [**Download**](8680/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8680/previews/nude.png) | [<NSFW, click to see>](8680/previews/nude2.png) |  |  |
| 8060 | 0.862 | [Download](8060/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8060/previews/nude.png) | [<NSFW, click to see>](8060/previews/nude2.png) |  |  |
| 7440 | 0.873 | [Download](7440/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7440/previews/nude.png) | [<NSFW, click to see>](7440/previews/nude2.png) |  |  |
| 6820 | 0.811 | [Download](6820/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6820/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6820/previews/nude.png) | [<NSFW, click to see>](6820/previews/nude2.png) |  |  |
| 6200 | 0.857 | [Download](6200/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6200/previews/nude.png) | [<NSFW, click to see>](6200/previews/nude2.png) |  |  |
| 5580 | 0.840 | [Download](5580/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5580/previews/nude.png) | [<NSFW, click to see>](5580/previews/nude2.png) |  |  |
| 4960 | 0.830 | [Download](4960/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4960/previews/nude.png) | [<NSFW, click to see>](4960/previews/nude2.png) |  |  |
| 4340 | 0.845 | [Download](4340/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4340/previews/nude.png) | [<NSFW, click to see>](4340/previews/nude2.png) |  |  |
| 3720 | 0.858 | [Download](3720/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3720/previews/nude.png) | [<NSFW, click to see>](3720/previews/nude2.png) |  |  |
| 3100 | 0.803 | [Download](3100/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3100/previews/nude.png) | [<NSFW, click to see>](3100/previews/nude2.png) |  |  |
| 2480 | 0.835 | [Download](2480/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2480/previews/nude.png) | [<NSFW, click to see>](2480/previews/nude2.png) |  |  |
| 1860 | 0.757 | [Download](1860/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1860/previews/nude.png) | [<NSFW, click to see>](1860/previews/nude2.png) |  |  |
| 1240 | 0.765 | [Download](1240/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1240/previews/nude.png) | [<NSFW, click to see>](1240/previews/nude2.png) |  |  |
| 620 | 0.797 | [Download](620/anisphia_wynn_palettia_tenseioujototensaireijounomahoukakumei.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](620/previews/nude.png) | [<NSFW, click to see>](620/previews/nude2.png) |  |  |
|
bigmorning/wh_4_sun_syl_w_0_lr_8en5_b32_0030 | bigmorning | 2023-09-13T17:35:46Z | 60 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-13T17:35:37Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: wh_4_sun_syl_w_0_lr_8en5_b32_0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wh_4_sun_syl_w_0_lr_8en5_b32_0030
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3105
- Train Accuracy: 0.0330
- Train Wermet: 0.0789
- Train Wermet Syl: 0.1339
- Validation Loss: 1.1690
- Validation Accuracy: 0.0207
- Validation Wermet: 0.3261
- Validation Wermet Syl: 0.3084
- Epoch: 29
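A transcription sketch in TensorFlow (assumptions: the repo ships TF weights, and no processor is bundled, so the processor is borrowed from the base openai/whisper-tiny checkpoint):
```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Assumption: reuse the base model's processor, since this repo may not bundle one.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained("bigmorning/wh_4_sun_syl_w_0_lr_8en5_b32_0030")

# Any 16 kHz mono waveform works; here a sample from a small public test set is used.
sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="tf")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```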
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 8e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.2081 | 0.0105 | 1.5151 | 1.1936 | 4.0143 | 0.0113 | 0.9817 | 0.9765 | 0 |
| 4.7532 | 0.0116 | 0.8756 | 0.8371 | 3.9494 | 0.0114 | 0.9457 | 0.9171 | 1 |
| 4.7031 | 0.0117 | 0.8615 | 0.8175 | 3.9244 | 0.0114 | 0.9383 | 0.8993 | 2 |
| 4.6799 | 0.0117 | 0.8569 | 0.8117 | 3.9091 | 0.0114 | 0.9659 | 0.9487 | 3 |
| 4.6415 | 0.0117 | 0.8450 | 0.7985 | 3.8828 | 0.0115 | 0.9227 | 0.8867 | 4 |
| 4.6194 | 0.0118 | 0.8366 | 0.7901 | 3.8588 | 0.0115 | 0.9314 | 0.8973 | 5 |
| 4.5993 | 0.0118 | 0.8253 | 0.7803 | 3.9068 | 0.0116 | 0.9000 | 0.8526 | 6 |
| 4.5584 | 0.0120 | 0.7864 | 0.7455 | 3.7567 | 0.0118 | 0.8407 | 0.7907 | 7 |
| 4.4423 | 0.0123 | 0.7655 | 0.7209 | 3.5099 | 0.0123 | 0.8256 | 0.7786 | 8 |
| 4.0719 | 0.0133 | 0.7479 | 0.7112 | 2.9385 | 0.0135 | 0.7636 | 0.7193 | 9 |
| 3.5177 | 0.0148 | 0.7049 | 0.6917 | 2.4295 | 0.0148 | 0.6907 | 0.6559 | 10 |
| 2.8931 | 0.0170 | 0.6532 | 0.6928 | 2.0341 | 0.0161 | 0.6031 | 0.5778 | 11 |
| 2.4018 | 0.0190 | 0.6050 | 0.7024 | 1.7053 | 0.0174 | 0.5373 | 0.5223 | 12 |
| 1.9940 | 0.0210 | 0.5579 | 0.7168 | 1.5422 | 0.0180 | 0.4803 | 0.4793 | 13 |
| 1.7258 | 0.0223 | 0.5356 | 0.7426 | 1.4314 | 0.0186 | 0.4270 | 0.4029 | 14 |
| 1.4823 | 0.0237 | 0.4865 | 0.7094 | 1.3495 | 0.0190 | 0.4146 | 0.4082 | 15 |
| 1.2444 | 0.0252 | 0.4278 | 0.6455 | 1.2070 | 0.0198 | 0.3945 | 0.3965 | 16 |
| 1.1179 | 0.0261 | 0.3774 | 0.5791 | 1.1574 | 0.0200 | 0.3679 | 0.3479 | 17 |
| 1.0182 | 0.0267 | 0.3377 | 0.5283 | 1.3384 | 0.0189 | 0.3799 | 0.3800 | 18 |
| 0.9117 | 0.0276 | 0.2949 | 0.4680 | 1.2758 | 0.0192 | 0.4053 | 0.4485 | 19 |
| 0.7453 | 0.0290 | 0.2613 | 0.4292 | 1.1716 | 0.0203 | 0.3453 | 0.3306 | 20 |
| 0.6662 | 0.0296 | 0.2193 | 0.3666 | 1.1194 | 0.0205 | 0.3391 | 0.3188 | 21 |
| 0.5670 | 0.0305 | 0.1841 | 0.3121 | 1.2457 | 0.0200 | 0.3477 | 0.3347 | 22 |
| 0.5777 | 0.0304 | 0.1711 | 0.2936 | 1.2328 | 0.0201 | 0.3512 | 0.3321 | 23 |
| 0.4963 | 0.0312 | 0.1340 | 0.2298 | 1.4423 | 0.0189 | 0.3788 | 0.3437 | 24 |
| 0.4911 | 0.0311 | 0.1337 | 0.2363 | 1.1022 | 0.0208 | 0.3216 | 0.3001 | 25 |
| 0.3529 | 0.0326 | 0.1065 | 0.2027 | 1.1164 | 0.0209 | 0.3236 | 0.2979 | 26 |
| 0.3296 | 0.0328 | 0.0915 | 0.1748 | 1.1417 | 0.0208 | 0.3238 | 0.3087 | 27 |
| 0.2417 | 0.0337 | 0.0661 | 0.1297 | 1.1344 | 0.0210 | 0.3181 | 0.2957 | 28 |
| 0.3105 | 0.0330 | 0.0789 | 0.1339 | 1.1690 | 0.0207 | 0.3261 | 0.3084 | 29 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Ajhl1992/lilt-en-funsd | Ajhl1992 | 2023-09-13T17:23:41Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-13T16:52:39Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6250
- Answer: {'precision': 0.8670520231213873, 'recall': 0.9179926560587516, 'f1': 0.89179548156956, 'number': 817}
- Header: {'precision': 0.6796116504854369, 'recall': 0.5882352941176471, 'f1': 0.6306306306306307, 'number': 119}
- Question: {'precision': 0.902867715078631, 'recall': 0.9062209842154132, 'f1': 0.9045412418906396, 'number': 1077}
- Overall Precision: 0.8765
- Overall Recall: 0.8922
- Overall F1: 0.8843
- Overall Accuracy: 0.8191
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4102 | 10.53 | 200 | 0.9780 | {'precision': 0.794341675734494, 'recall': 0.8935128518971848, 'f1': 0.8410138248847926, 'number': 817} | {'precision': 0.6351351351351351, 'recall': 0.3949579831932773, 'f1': 0.48704663212435234, 'number': 119} | {'precision': 0.8618834080717489, 'recall': 0.8922934076137419, 'f1': 0.8768248175182481, 'number': 1077} | 0.8245 | 0.8634 | 0.8435 | 0.8098 |
| 0.0415 | 21.05 | 400 | 1.2998 | {'precision': 0.8573113207547169, 'recall': 0.8898408812729498, 'f1': 0.8732732732732732, 'number': 817} | {'precision': 0.5887850467289719, 'recall': 0.5294117647058824, 'f1': 0.5575221238938053, 'number': 119} | {'precision': 0.8603256212510711, 'recall': 0.9322191272051996, 'f1': 0.894830659536542, 'number': 1077} | 0.8454 | 0.8912 | 0.8677 | 0.8044 |
| 0.0138 | 31.58 | 600 | 1.4296 | {'precision': 0.8388952819332566, 'recall': 0.8922888616891065, 'f1': 0.8647686832740212, 'number': 817} | {'precision': 0.5185185185185185, 'recall': 0.7058823529411765, 'f1': 0.597864768683274, 'number': 119} | {'precision': 0.906158357771261, 'recall': 0.8607242339832869, 'f1': 0.8828571428571429, 'number': 1077} | 0.8471 | 0.8644 | 0.8557 | 0.8033 |
| 0.0071 | 42.11 | 800 | 1.5437 | {'precision': 0.8325991189427313, 'recall': 0.9253365973072215, 'f1': 0.8765217391304347, 'number': 817} | {'precision': 0.6593406593406593, 'recall': 0.5042016806722689, 'f1': 0.5714285714285715, 'number': 119} | {'precision': 0.8944392082940622, 'recall': 0.8811513463324049, 'f1': 0.8877455565949485, 'number': 1077} | 0.8568 | 0.8768 | 0.8667 | 0.8003 |
| 0.0035 | 52.63 | 1000 | 1.6306 | {'precision': 0.8327832783278328, 'recall': 0.9265605875152999, 'f1': 0.8771726535341833, 'number': 817} | {'precision': 0.6509433962264151, 'recall': 0.5798319327731093, 'f1': 0.6133333333333333, 'number': 119} | {'precision': 0.9034676663542643, 'recall': 0.8950789229340761, 'f1': 0.8992537313432836, 'number': 1077} | 0.8598 | 0.8892 | 0.8742 | 0.7967 |
| 0.0022 | 63.16 | 1200 | 1.6872 | {'precision': 0.8472063854047891, 'recall': 0.9094247246022031, 'f1': 0.8772136953955136, 'number': 817} | {'precision': 0.6363636363636364, 'recall': 0.5294117647058824, 'f1': 0.5779816513761468, 'number': 119} | {'precision': 0.9077212806026366, 'recall': 0.8950789229340761, 'f1': 0.9013557737260401, 'number': 1077} | 0.8685 | 0.8793 | 0.8739 | 0.7997 |
| 0.0021 | 73.68 | 1400 | 1.6366 | {'precision': 0.8106060606060606, 'recall': 0.9167686658506732, 'f1': 0.8604250430786904, 'number': 817} | {'precision': 0.5904761904761905, 'recall': 0.5210084033613446, 'f1': 0.5535714285714286, 'number': 119} | {'precision': 0.8941605839416058, 'recall': 0.9099350046425255, 'f1': 0.9019788311090657, 'number': 1077} | 0.8428 | 0.8897 | 0.8656 | 0.8054 |
| 0.0011 | 84.21 | 1600 | 1.5864 | {'precision': 0.8795180722891566, 'recall': 0.8935128518971848, 'f1': 0.8864602307225258, 'number': 817} | {'precision': 0.6481481481481481, 'recall': 0.5882352941176471, 'f1': 0.6167400881057269, 'number': 119} | {'precision': 0.8894783377541998, 'recall': 0.9340761374187558, 'f1': 0.911231884057971, 'number': 1077} | 0.8729 | 0.8972 | 0.8849 | 0.8194 |
| 0.0005 | 94.74 | 1800 | 1.5746 | {'precision': 0.8587699316628702, 'recall': 0.9228886168910648, 'f1': 0.8896755162241888, 'number': 817} | {'precision': 0.66, 'recall': 0.5546218487394958, 'f1': 0.6027397260273973, 'number': 119} | {'precision': 0.9055045871559633, 'recall': 0.9164345403899722, 'f1': 0.9109367789570835, 'number': 1077} | 0.8738 | 0.8977 | 0.8856 | 0.8254 |
| 0.0004 | 105.26 | 2000 | 1.6031 | {'precision': 0.8669778296382731, 'recall': 0.9094247246022031, 'f1': 0.8876941457586618, 'number': 817} | {'precision': 0.6173913043478261, 'recall': 0.5966386554621849, 'f1': 0.6068376068376068, 'number': 119} | {'precision': 0.904363974001857, 'recall': 0.904363974001857, 'f1': 0.904363974001857, 'number': 1077} | 0.8726 | 0.8882 | 0.8804 | 0.8218 |
| 0.0003 | 115.79 | 2200 | 1.6122 | {'precision': 0.8632183908045977, 'recall': 0.9192166462668299, 'f1': 0.890337877889745, 'number': 817} | {'precision': 0.6831683168316832, 'recall': 0.5798319327731093, 'f1': 0.6272727272727273, 'number': 119} | {'precision': 0.9016544117647058, 'recall': 0.9108635097493036, 'f1': 0.9062355658198614, 'number': 1077} | 0.8747 | 0.8947 | 0.8846 | 0.8221 |
| 0.0002 | 126.32 | 2400 | 1.6250 | {'precision': 0.8670520231213873, 'recall': 0.9179926560587516, 'f1': 0.89179548156956, 'number': 817} | {'precision': 0.6796116504854369, 'recall': 0.5882352941176471, 'f1': 0.6306306306306307, 'number': 119} | {'precision': 0.902867715078631, 'recall': 0.9062209842154132, 'f1': 0.9045412418906396, 'number': 1077} | 0.8765 | 0.8922 | 0.8843 | 0.8191 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Jayem-11/wav2vec2-base-dvoice-s2t-kaggle | Jayem-11 | 2023-09-13T17:16:01Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-11T11:27:50Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-dvoice-s2t-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-dvoice-s2t-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.14.1
- Tokenizers 0.13.3
|
CyberHarem/yuri_pokemon | CyberHarem | 2023-09-13T17:11:12Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yuri_pokemon",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T16:55:08Z | ---
license: mit
datasets:
- CyberHarem/yuri_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yuri_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3840, you need to download `3840/yuri_pokemon.pt` as the embedding and `3840/yuri_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3840**, with a score of 0.799. The trigger words are:
1. `yuri_pokemon`
2. `brown_hair, short_hair, bangs, brown_eyes, hat, green_headwear, tam_o'_shanter, smile, cardigan, grey_cardigan, blush, open_mouth, bob_cut, backpack, bag`
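The following diffusers sketch is only a rough guide for the recommended step; it assumes the safetensors file is in a LoRA format that diffusers can read (HCP-Diffusion exports may otherwise need conversion or a WebUI-style loader) and that the pt file is a textual-inversion embedding.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Files downloaded from the step-3840 folder of this repository.
pipe.load_textual_inversion("3840/yuri_pokemon.pt", token="yuri_pokemon")
pipe.load_lora_weights("3840", weight_name="yuri_pokemon.safetensors")

image = pipe(
    "yuri_pokemon, brown_hair, short_hair, green_headwear, smile",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("yuri_preview.png")
```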
This model is not recommended for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.792 | [Download](7200/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.788 | [Download](6720/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.769 | [Download](6240/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.770 | [Download](5760/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.786 | [Download](5280/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.751 | [Download](4800/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.738 | [Download](4320/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| **3840** | **0.799** | [**Download**](3840/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.751 | [Download](3360/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.768 | [Download](2880/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.796 | [Download](2400/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.757 | [Download](1920/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.843 | [Download](1440/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.735 | [Download](960/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.734 | [Download](480/yuri_pokemon.zip) |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
u6ujh6h6/ddddd | u6ujh6h6 | 2023-09-13T17:11:08Z | 0 | 0 | null | [
"text-generation",
"dataset:codefuse-ai/CodeExercise-Python-27k",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:mhhmm/leetcode-solutions-python",
"dataset:mlabonne/Evol-Instruct-Python-26k",
"dataset:Nan-Do/instructional_code-search-net-python",
"dataset:Arjun-G-Ravi/Python-codes",
"dataset:reshinthadith/pairwise-code-review-instruct-critique-revision-python",
"dataset:iamketan25/python-qa-instructions-dataset",
"doi:10.57967/hf/1100",
"region:us"
]
| text-generation | 2023-09-13T17:09:46Z | ---
datasets:
- codefuse-ai/CodeExercise-Python-27k
- iamtarun/python_code_instructions_18k_alpaca
- mhhmm/leetcode-solutions-python
- mlabonne/Evol-Instruct-Python-26k
- Nan-Do/instructional_code-search-net-python
- Arjun-G-Ravi/Python-codes
- reshinthadith/pairwise-code-review-instruct-critique-revision-python
- iamketan25/python-qa-instructions-dataset
pipeline_tag: text-generation
--- |
nirbhayfaaya/faaya-controlnet-lineart-print-generation | nirbhayfaaya | 2023-09-13T17:08:27Z | 0 | 0 | diffusers | [
"diffusers",
"region:us"
]
| null | 2023-09-12T12:47:50Z | This repository contains the line-art control model for print generation. |
mskhattori/hubert-rinnna-jp-jdrtsp-fw07sp-12 | mskhattori | 2023-09-13T16:50:11Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:rinna/japanese-hubert-base",
"base_model:finetune:rinna/japanese-hubert-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-13T14:24:07Z | ---
license: apache-2.0
base_model: rinna/japanese-hubert-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: hubert-rinnna-jp-jdrtsp-fw07sp-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-rinnna-jp-jdrtsp-fw07sp-12
This model is a fine-tuned version of [rinna/japanese-hubert-base](https://huggingface.co/rinna/japanese-hubert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1989
- Wer: 0.6801
- Cer: 0.5794
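Although usage notes are missing, inference can be sketched with the ASR pipeline, assuming the repository ships the CTC head and processor files; `sample.wav` is a placeholder for a 16 kHz Japanese recording.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mskhattori/hubert-rinnna-jp-jdrtsp-fw07sp-12",
)
print(asr("sample.wav")["text"])
```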
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.0318 | 1.0 | 404 | 4.2999 | 0.9798 | 0.9889 |
| 3.5113 | 2.0 | 808 | 3.3289 | 0.9798 | 0.9889 |
| 2.7536 | 3.0 | 1212 | 2.7007 | 0.9798 | 0.9889 |
| 2.4826 | 4.0 | 1616 | 2.3732 | 0.9798 | 0.9889 |
| 2.0642 | 5.0 | 2020 | 1.9165 | 0.9798 | 0.9888 |
| 1.834 | 6.0 | 2424 | 1.6739 | 0.9504 | 0.9464 |
| 1.6869 | 7.0 | 2828 | 1.4651 | 0.8239 | 0.7865 |
| 1.5734 | 8.0 | 3232 | 1.3267 | 0.7440 | 0.6939 |
| 1.5052 | 9.0 | 3636 | 1.2331 | 0.7045 | 0.6231 |
| 1.4573 | 10.0 | 4040 | 1.1989 | 0.6801 | 0.5794 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bigmorning/wh_4_sun_syl_w_0_lr_8en5_b32_0015 | bigmorning | 2023-09-13T16:49:11Z | 59 | 0 | transformers | [
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-13T16:49:03Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: wh_4_sun_syl_w_0_lr_8en5_b32_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wh_4_sun_syl_w_0_lr_8en5_b32_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7258
- Train Accuracy: 0.0223
- Train Wermet: 0.5356
- Train Wermet Syl: 0.7426
- Validation Loss: 1.4314
- Validation Accuracy: 0.0186
- Validation Wermet: 0.4270
- Validation Wermet Syl: 0.4029
- Epoch: 14
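Since this is a TensorFlow checkpoint, a rough inference sketch could look like the following; the processor is assumed to be the one of the base `openai/whisper-tiny` model, as the card does not state which processor was saved.
```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained(
    "bigmorning/wh_4_sun_syl_w_0_lr_8en5_b32_0015"
)

# Tiny public sample purely for illustration.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="tf")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```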
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 8e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.2081 | 0.0105 | 1.5151 | 1.1936 | 4.0143 | 0.0113 | 0.9817 | 0.9765 | 0 |
| 4.7532 | 0.0116 | 0.8756 | 0.8371 | 3.9494 | 0.0114 | 0.9457 | 0.9171 | 1 |
| 4.7031 | 0.0117 | 0.8615 | 0.8175 | 3.9244 | 0.0114 | 0.9383 | 0.8993 | 2 |
| 4.6799 | 0.0117 | 0.8569 | 0.8117 | 3.9091 | 0.0114 | 0.9659 | 0.9487 | 3 |
| 4.6415 | 0.0117 | 0.8450 | 0.7985 | 3.8828 | 0.0115 | 0.9227 | 0.8867 | 4 |
| 4.6194 | 0.0118 | 0.8366 | 0.7901 | 3.8588 | 0.0115 | 0.9314 | 0.8973 | 5 |
| 4.5993 | 0.0118 | 0.8253 | 0.7803 | 3.9068 | 0.0116 | 0.9000 | 0.8526 | 6 |
| 4.5584 | 0.0120 | 0.7864 | 0.7455 | 3.7567 | 0.0118 | 0.8407 | 0.7907 | 7 |
| 4.4423 | 0.0123 | 0.7655 | 0.7209 | 3.5099 | 0.0123 | 0.8256 | 0.7786 | 8 |
| 4.0719 | 0.0133 | 0.7479 | 0.7112 | 2.9385 | 0.0135 | 0.7636 | 0.7193 | 9 |
| 3.5177 | 0.0148 | 0.7049 | 0.6917 | 2.4295 | 0.0148 | 0.6907 | 0.6559 | 10 |
| 2.8931 | 0.0170 | 0.6532 | 0.6928 | 2.0341 | 0.0161 | 0.6031 | 0.5778 | 11 |
| 2.4018 | 0.0190 | 0.6050 | 0.7024 | 1.7053 | 0.0174 | 0.5373 | 0.5223 | 12 |
| 1.9940 | 0.0210 | 0.5579 | 0.7168 | 1.5422 | 0.0180 | 0.4803 | 0.4793 | 13 |
| 1.7258 | 0.0223 | 0.5356 | 0.7426 | 1.4314 | 0.0186 | 0.4270 | 0.4029 | 14 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
jtlicardo/bert-finetuned-bpmn | jtlicardo | 2023-09-13T16:25:47Z | 128 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-12-10T16:08:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: The process starts when the customer enters the shop. The customer then takes
the product from the shelf. The customer then pays for the product and leaves
the store.
example_title: Example 1
- text: The process begins when the HR department hires the new employee. Next, the
new employee completes necessary paperwork and provides documentation to the HR
department. After the initial task, the HR department performs a decision to
determine the employee's role and department assignment. The employee is trained
on the company's sales processes and systems by the Sales department. After the
training, the Sales department assigns the employee a sales quota and performance
goals. Finally, the process ends with an 'End' event, when the employee begins
their role in the Sales department.
example_title: Example 2
- text: The process begins with a 'Start' event, when a customer places an order for
a product on the company's website. Next, the customer service department checks
the availability of the product and confirms the order with the customer. After
the initial task, the warehouse processes the order. If the order is eligible
for same-day shipping, the warehouse staff picks and packs the order, and it is
sent to the shipping department. After the order is packed, the shipping department
arranges for the order to be delivered to the customer. Finally, the process ends
with an 'End' event, when the customer receives their order.
example_title: Example 3
base_model: bert-base-cased
model-index:
- name: bert-finetuned-bpmn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-bpmn
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on a dataset containing textual process descriptions.
The dataset contains 2 target labels:
* `AGENT`
* `TASK`
The dataset (and the notebook used for training) can be found on the following GitHub repo: https://github.com/jtlicardo/bert-finetuned-bpmn
Update: a model trained on 5 BPMN-specific labels can be found here: https://huggingface.co/jtlicardo/bpmn-information-extraction
The model achieves the following results on the evaluation set:
- Loss: 0.2656
- Precision: 0.7314
- Recall: 0.8366
- F1: 0.7805
- Accuracy: 0.8939
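A short usage sketch with the token-classification pipeline (the sentence is taken from the widget examples above):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jtlicardo/bert-finetuned-bpmn",
    aggregation_strategy="simple",
)

text = (
    "The process starts when the customer enters the shop. "
    "The customer then pays for the product and leaves the store."
)
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"])
```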
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.8437 | 0.1899 | 0.3203 | 0.2384 | 0.7005 |
| No log | 2.0 | 20 | 0.4967 | 0.5421 | 0.7582 | 0.6322 | 0.8417 |
| No log | 3.0 | 30 | 0.3403 | 0.6719 | 0.8431 | 0.7478 | 0.8867 |
| No log | 4.0 | 40 | 0.2821 | 0.6923 | 0.8235 | 0.7522 | 0.8903 |
| No log | 5.0 | 50 | 0.2656 | 0.7314 | 0.8366 | 0.7805 | 0.8939 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huawei-noah/EntityCS-39-MLM-xlmr-base | huawei-noah | 2023-09-13T16:23:30Z | 270 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"af",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"ka",
"kk",
"ko",
"ml",
"mr",
"ms",
"my",
"nl",
"pt",
"ru",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ur",
"vi",
"yo",
"zh",
"arxiv:1904.09223",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-07T09:36:59Z | ---
license: apache-2.0
language:
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
---
# Model Card for EntityCS-39-MLM-xlmr-base
- Paper: https://aclanthology.org/2022.findings-emnlp.499.pdf
- Repository: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
- Point of Contact: [Fenia Christopoulou](mailto:[email protected]), [Chenxi Whitehouse](mailto:[email protected])
## Model Description
This model has been trained on the EntityCS corpus, an English corpus from Wikipedia with replaced entities in different languages.
The corpus can be found in [https://huggingface.co/huawei-noah/entity_cs](https://huggingface.co/huawei-noah/entity_cs), check the link for more details.
To train models on the corpus, we first employ the conventional 80-10-10 MLM objective, where 15% of sentence subwords are considered as masking candidates. From those, we replace subwords
with [MASK] 80% of the time, with Random subwords (from the entire vocabulary) 10% of the time, and leave the remaining 10% unchanged (Same).
To integrate entity-level cross-lingual knowledge into the model, we propose Entity Prediction objectives, where we only mask subwords belonging
to an entity. By predicting the masked entities in EntityCS sentences, we expect the model to capture the semantics of the same entity in different
languages.
Two different masking strategies are proposed for predicting entities: Whole Entity Prediction (`WEP`) and Partial Entity Prediction (`PEP`).
In WEP, motivated by [Sun et al. (2019)](https://arxiv.org/abs/1904.09223) where whole word masking is also adopted, we consider all the words (and consequently subwords) inside
an entity as masking candidates. Then, 80% of the time we mask every subword inside an entity, and
20% of the time we keep the subwords intact. Note that, as our goal is to predict the entire masked
entity, we do not allow replacing with Random subwords, since it can introduce noise and result
in the model predicting incorrect entities. After entities are masked, we remove the entity indicators
`<e>`, `</e>` from the sentences before feeding them to the model.
For PEP, we also consider all entities as masking candidates. In contrast to WEP, we do not force
subwords belonging to one entity to be either all masked or all unmasked. Instead, each individual
entity subword is masked 80% of the time. For the remaining 20% of the masking candidates, we experiment with three different replacements. First,
PEP<sub>MRS</sub> corresponds to the conventional 80-10-10 masking strategy, where 10% of the remaining
subwords are replaced with Random subwords and the other 10% are kept unchanged. In the second
setting, PEP<sub>MS</sub>, we remove the 10% Random subwords substitution, i.e. we predict the 80% masked
subwords and 10% Same subwords from the masking candidates. In the third setting, PEP<sub>M</sub>, we
further remove the 10% Same subwords prediction, essentially predicting only the masked subwords.
Prior work has shown it is effective to combine Entity Prediction with MLM for cross-lingual transfer ([Jiang et al., 2020](https://aclanthology.org/2020.emnlp-main.479/)); therefore, we investigate the
combination of the Entity Prediction objectives together with MLM on non-entity subwords. Specifically, when combined with MLM, we lower the
entity masking probability (p) to 50% to roughly keep the same overall masking percentage.
This results in the following objectives: WEP + MLM, PEP<sub>MRS</sub> + MLM, PEP<sub>MS</sub> + MLM, PEP<sub>M</sub> + MLM
This model was trained with the **MLM** objective on the EntityCS corpus with 39 languages.
## Training Details
We start from the [XLM-R-base](https://huggingface.co/xlm-roberta-base) model and train for 1 epoch on 8 Nvidia V100 32GB GPUs.
We set batch size to 16 and gradient accumulation steps to 2, resulting in an effective batch size of 256.
For speedup we use fp16 mixed precision.
We use the sampling strategy proposed by [Conneau and Lample (2019)](https://dl.acm.org/doi/pdf/10.5555/3454287.3454921), where high resource languages are down-sampled and low
resource languages get sampled more frequently.
We only train the embedding and the last two layers of the model.
We randomly choose 100 sentences from each language to serve as a validation set, on which we measure the perplexity every 10K training steps.
**This checkpoint corresponds to the one with the lower perplexity on the validation set.**
## Usage
The current model can be used for further fine-tuning on downstream tasks.
In the paper, we focused on entity-related tasks, such as NER, Word Sense Disambiguation and Slot Filling.
Alternatively, it can be used directly (no fine-tuning) for probing tasks, i.e. predict missing words, such as [X-FACTR](https://aclanthology.org/2020.emnlp-main.479/).
For results on each downstream task, please refer to the [paper](https://aclanthology.org/2022.findings-emnlp.499.pdf).
## How to Get Started with the Model
Use the code below to get started with training: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
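For direct use without fine-tuning (the probing setting mentioned above), a minimal fill-mask sketch might look like this; the example sentence is purely illustrative.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="huawei-noah/EntityCS-39-MLM-xlmr-base")

for pred in fill("The capital of France is <mask>."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```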
## Citation
**BibTeX**
```html
@inproceedings{whitehouse-etal-2022-entitycs,
title = "{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching",
author = "Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.499",
pages = "6698--6714"
}
```
**APA**
```html
Whitehouse, C., Christopoulou, F., & Iacobacci, I. (2022). EntityCS: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching. In Findings of the Association for Computational Linguistics: EMNLP 2022.
```
|
LarryAIDraw/MomozonoMomoka | LarryAIDraw | 2023-09-13T16:20:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-13T16:08:23Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144293/momozono-momokadokyuu-hentai-hxeros |
LarryAIDraw/Tsunade | LarryAIDraw | 2023-09-13T16:20:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-13T16:07:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144335/tsunade-senju-naruto |
LarryAIDraw/Kmiyako0005 | LarryAIDraw | 2023-09-13T16:20:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-13T16:06:53Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144399/miyako-sonobisquedollwakoiwosuru |
LarryAIDraw/MahoukaAngelina-v2-06 | LarryAIDraw | 2023-09-13T16:19:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-13T16:06:02Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144428/mahouka-koukou-no-rettousei-angelina-kudou-shields-3-outfits |
LarryAIDraw/ciel | LarryAIDraw | 2023-09-13T16:18:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-13T16:03:07Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144702/characterciel-i-synduality-noir |
LarryAIDraw/Tang_Xin | LarryAIDraw | 2023-09-13T16:17:56Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-13T16:04:00Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144838/tang-xin-or-i-use-my-fake-girlfriend-as-a-shield-manhua |
Sachin16/dqn-SpaceInvadersNoFrameskip-v4 | Sachin16 | 2023-09-13T16:00:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T16:00:15Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sachin16 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sachin16 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sachin16
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
texasdave2/distilbert-base-uncased-squad_v2 | texasdave2 | 2023-09-13T15:59:02Z | 62 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-13T15:58:38Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad_v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
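As a rough sketch of the intended use, the checkpoint can be queried through the question-answering pipeline; `framework="tf"` selects the TensorFlow weights, and the question/context pair is illustrative.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="texasdave2/distilbert-base-uncased-squad_v2",
    framework="tf",
)

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned with Keras on the SQuAD v2 dataset.",
)
print(result["answer"], result["score"])
```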
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
paragon-analytics/llama2_7b_pf | paragon-analytics | 2023-09-13T15:58:08Z | 6 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-09-13T14:58:30Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
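A loading sketch consistent with this configuration is shown below; the base model is assumed to be `meta-llama/Llama-2-7b-hf` (suggested by the repository name but not stated in the card).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach the adapter weights from this repository on top of the 4-bit base model.
model = PeftModel.from_pretrained(base, "paragon-analytics/llama2_7b_pf")
```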
### Framework versions
- PEFT 0.6.0.dev0
|
Lamurias/Reinforce-pixelcopter | Lamurias | 2023-09-13T15:55:29Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-13T15:55:25Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.00 +/- 30.98
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gokoV/whisper-tiny-us_handson | gokoV | 2023-09-13T15:54:30Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-13T09:38:59Z | ---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Whisper tiny US - handson_tut
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14-en-US
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 44.23529411764706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny US - handson_tut
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14-en-US dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7488
- Wer Ortho: 43.1181
- Wer: 44.2353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0007 | 17.86 | 500 | 0.7488 | 43.1181 | 44.2353 |
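The reported Wer is a word error rate in percent; it can be recomputed with the `evaluate` library roughly as follows (the strings are placeholders for model transcripts and reference transcripts).
```python
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["i would like to check my balance"]          # model outputs
references = ["i would like to check my balance please"]    # ground truth

print(100 * wer_metric.compute(predictions=predictions, references=references))
```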
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Lethargus/ppo-SnowballTarget | Lethargus | 2023-09-13T15:51:18Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-09-13T15:51:15Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Lethargus/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Terps/ppo-SnowballTarget | Terps | 2023-09-13T15:49:28Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-09-13T15:29:46Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Terps/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
buddhilive/bert-nopretrained-squad | buddhilive | 2023-09-13T15:39:25Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:buddhilive/bert-base-zero",
"base_model:finetune:buddhilive/bert-base-zero",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-10T10:34:44Z | ---
base_model: buddhilive/bert-base-zero
tags:
- generated_from_keras_callback
model-index:
- name: buddhilive/bert-nopretrained-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# buddhilive/bert-nopretrained-squad
This model is a fine-tuned version of [buddhilive/bert-base-zero](https://huggingface.co/buddhilive/bert-base-zero) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7831
- Validation Loss: 0.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2872 | 0.0 | 0 |
| 3.8920 | 0.0 | 1 |
| 3.7831 | 0.0 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
LeBenchmark/wav2vec2-FR-14K-xlarge | LeBenchmark | 2023-09-13T15:35:45Z | 11 | 1 | transformers | [
"transformers",
"wav2vec2",
"pretraining",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-10-31T20:23:58Z | ---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---
# LeBenchmark 2.0: wav2vec2 xlarge model trained on 14K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later one (LeBenchmark 2.0) extends the first in both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release pretrained models under our HuggingFace organization, coupling four wav2vec2 architectures (*Light*, *Base*, *Large* and *xLarge*) with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace in this area.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech and deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is easily done within SpeechBrain, as only a single flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
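Outside Fairseq and SpeechBrain, frozen features can also be extracted with Hugging Face Transformers; the sketch below assumes the repository ships a feature-extractor config (otherwise instantiate `Wav2Vec2FeatureExtractor()` with default 16 kHz settings).
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "LeBenchmark/wav2vec2-FR-14K-xlarge"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id).eval()

speech = torch.randn(16000)  # placeholder for one second of real 16 kHz French audio
inputs = feature_extractor(speech.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, frames, hidden_size)
print(hidden_states.shape)
```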
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
andreipb/roberta-poetry-life-crpo | andreipb | 2023-09-13T15:32:44Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T14:59:30Z | ---
license: mit
language:
- en
pipeline_tag: fill-mask
library_name: transformers
widget:
- text: "This morning, the CEO was <mask>."
example_title: "Example 1"
- text: "Yesterday, all the students were <mask> in the park."
example_title: "Example 2"
- text: "All the children seemed <mask>."
example_title: "Example 3"
- text: "I opened the door and found a <mask> behind it."
example_title: "Example 4"
- text: "We went to see the <mask> movie."
example_title: "Example 5"
---
# roberta-poetry-life-crpo
This model is based on the [RoBERTa base model](https://huggingface.co/roberta-base) (125M parameters)
fine-tuned for 20 epochs on a poetry dataset of 14 MB. This dataset was extracted from
the [Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) using an automatic classifier
for poems related to the topic of **life and death**.
The model replaces a masked word, indicated by the `<mask>` tag, with a word associated with **life and death**, while preserving fluency.
Caution: the topic (here, **life and death**) only biases the choice of words with respect to the base model, so do not expect to find
only words strongly associated with this topic.
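A minimal fill-mask sketch matching the widget examples above:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="andreipb/roberta-poetry-life-crpo")

for pred in fill("I opened the door and found a <mask> behind it."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```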
This model was trained by [Teo Ferrari](https://www.linkedin.com/in/teo-ferrari-0a4009176/)
as part of his Bachelor thesis at [HEIG-VD](https://gaps.heig-vd.ch/public/diplome/rapports.php?id=6763),
supervised by [Andrei Popescu-Belis](http://iict-space.heig-vd.ch/apu/).
The model is described in "[GPoeT: a Language Model Trained for Rhyme Generation on Synthetic Data](https://aclanthology.org/2023.latechclfl-1.2/)"
and is used in the [CR-PO](https://github.com/heig-iict-ida/crpo) system for [interactive poem generation](https://aclanthology.org/2022.lrec-1.377),
along with several other models for specific topics or emotions.
|
andreipb/roberta-poetry-love-crpo | andreipb | 2023-09-13T15:32:17Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-13T15:07:39Z | ---
license: mit
language:
- en
pipeline_tag: fill-mask
library_name: transformers
widget:
- text: "This morning, the CEO was <mask>."
example_title: "Example 1"
- text: "Yesterday, all the students were <mask> in the park."
example_title: "Example 2"
- text: "All the children seemed <mask>."
example_title: "Example 3"
- text: "I opened the door and found a <mask> behind it."
example_title: "Example 4"
- text: "We went to see the <mask> movie."
example_title: "Example 5"
---
# roberta-poetry-love-crpo
This model is based on the [RoBERTa base model](https://huggingface.co/roberta-base) (125M parameters)
fine-tuned for 20 epochs on a poetry dataset of 4 MB. This dataset was extracted from
the [Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) using an automatic classifier
for poems related to the topic of **love**.
The model replaces a masked word, indicated by the `<mask>` tag, with a word associated with **love**, while preserving fluency.
Caution: the topic (here, **love**) only biases the choice of words with respect to the base model, so do not expect to find
only words strongly associated with this topic.
This model was trained by [Teo Ferrari](https://www.linkedin.com/in/teo-ferrari-0a4009176/)
as part of his Bachelor thesis at [HEIG-VD](https://gaps.heig-vd.ch/public/diplome/rapports.php?id=6763),
supervised by [Andrei Popescu-Belis](http://iict-space.heig-vd.ch/apu/).
The model is described in "[GPoeT: a Language Model Trained for Rhyme Generation on Synthetic Data](https://aclanthology.org/2023.latechclfl-1.2/)"
and is used in the [CR-PO](https://github.com/heig-iict-ida/crpo) system for [interactive poem generation](https://aclanthology.org/2022.lrec-1.377),
along with several other models for specific topics or emotions.
|
huawei-noah/EntityCS-39-PEP_MS_MLM-xlmr-base | huawei-noah | 2023-09-13T15:20:06Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"af",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"ka",
"kk",
"ko",
"ml",
"mr",
"ms",
"my",
"nl",
"pt",
"ru",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ur",
"vi",
"yo",
"zh",
"arxiv:1904.09223",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-07T09:40:23Z | ---
license: apache-2.0
language:
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
---
# Model Card for EntityCS-39-PEP_MS_MLM-xlmr-base
This model has been trained on the EntityCS corpus, an English corpus from Wikipedia with replaced entities in different languages.
The corpus can be found at [https://huggingface.co/huawei-noah/entity_cs](https://huggingface.co/huawei-noah/entity_cs); check the link for more details.
Firstly, we employ the conventional 80-10-10 MLM objective, where 15% of sentence subwords are considered as masking candidates. From those, we replace subwords
with [MASK] 80% of the time, with Random subwords (from the entire vocabulary) 10% of the time, and leave the remaining 10% unchanged (Same).
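As a concrete illustration of this 80-10-10 scheme, here is a minimal sketch (not the training code used for this model):
```python
import random

def mlm_masking(token_ids, vocab_size, mask_id, candidate_prob=0.15):
    """Illustrative 80-10-10 MLM masking; labels of -100 are ignored by the loss."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < candidate_prob:      # 15% of subwords become masking candidates
            labels[i] = tok                       # the original subword is the prediction target
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id               # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)  # 10%: Random subword
            # remaining 10%: keep the subword unchanged (Same)
    return inputs, labels
```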
To integrate entity-level cross-lingual knowledge into the model, we propose Entity Prediction objectives, where we only mask subwords belonging
to an entity. By predicting the masked entities in ENTITYCS sentences, we expect the model to capture the semantics of the same entity in different
languages.
Two different masking strategies are proposed for predicting entities: Whole Entity Prediction (`WEP`) and Partial Entity Prediction (`PEP`).
In WEP, motivated by [Sun et al. (2019)](https://arxiv.org/abs/1904.09223) where whole word masking is also adopted, we consider all the words (and consequently subwords) inside
an entity as masking candidates. Then, 80% of the time we mask every subword inside an entity, and
20% of the time we keep the subwords intact. Note that, as our goal is to predict the entire masked
entity, we do not allow replacing with Random subwords, since it can introduce noise and result
in the model predicting incorrect entities. After entities are masked, we remove the entity indicators
`<e>`, `</e>` from the sentences before feeding them to the model.
For PEP, we also consider all entities as masking candidates. In contrast to WEP, we do not force
subwords belonging to one entity to be either all masked or all unmasked. Instead, each individual
entity subword is masked 80% of the time. For the remaining 20% of the masking candidates, we experiment with three different replacements. First,
PEP<sub>MRS</sub> corresponds to the conventional 80-10-10 masking strategy, where 10% of the remaining
subwords are replaced with Random subwords and the other 10% are kept unchanged. In the second
setting, PEP<sub>MS</sub>, we remove the 10% Random subwords substitution, i.e. we predict the 80% masked
subwords and 10% Same subwords from the masking candidates. In the third setting, PEP<sub>M</sub>, we
further remove the 10% Same subwords prediction, essentially predicting only the masked subwords.
Prior work has shown that it is effective to combine
Entity Prediction with MLM for cross-lingual transfer ([Jiang et al., 2020](https://aclanthology.org/2020.emnlp-main.479/)), therefore we investigate the
combination of the Entity Prediction objectives together with MLM on non-entity subwords. Specifically, when combined with MLM, we lower the
entity masking probability (p) to 50% to roughly keep the same overall masking percentage.
This results in the following objectives: WEP + MLM, PEP<sub>MRS</sub> + MLM, PEP<sub>MS</sub> + MLM, PEP<sub>M</sub> + MLM.
This model was trained with the **PEP<sub>MS</sub> + MLM** objective on the EntityCS corpus with 39 languages.
## Model Details
### Training Details
We start from the [XLM-R-base](https://huggingface.co/xlm-roberta-base) model and train for 1 epoch on 8 Nvidia V100 32GB GPUs.
We set batch size to 16 and gradient accumulation steps to 2, resulting in an effective batch size of 256.
For speedup we use fp16 mixed precision.
We use the sampling strategy proposed by [Conneau and Lample (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf), where high resource languages are down-sampled and low
resource languages get sampled more frequently.
We only train the embedding and the last two layers of the model.
We randomly choose 100 sentences from each language to serve as a validation set, on which we measure the perplexity every 10K training steps.
**This checkpoint corresponds to the one with the lowest perplexity on the validation set.**
## Usage
The current model can be used for further fine-tuning on downstream tasks.
In the paper, we focused on entity-related tasks, such as NER, Word Sense Disambiguation and Slot Filling.
Alternatively, it can be used directly (without fine-tuning) for probing tasks, i.e. predicting missing words, as in [X-FACTR](https://aclanthology.org/2020.emnlp-main.479/).
## How to Get Started with the Model
Use the code below to get started with the model: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
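In addition to the repository above, a minimal sketch for loading the checkpoint with `transformers` (e.g. for probing; the example sentence is made up) could look like this:
```python
from transformers import pipeline

model_name = "huawei-noah/EntityCS-39-PEP_MS_MLM-xlmr-base"

# XLM-R based checkpoint, so the mask token is <mask>.
fill_mask = pipeline("fill-mask", model=model_name)

for prediction in fill_mask("The capital of France is <mask>.", top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```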
## Citation
**BibTeX:**
```bibtex
@inproceedings{whitehouse-etal-2022-entitycs,
title = "{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching",
author = "Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.499",
pages = "6698--6714"
}
```
## Model Card Contact
[Fenia Christopoulou](mailto:[email protected])
|
huawei-noah/EntityCS-39-PEP_MS-xlmr-base | huawei-noah | 2023-09-13T15:19:33Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"af",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"ka",
"kk",
"ko",
"ml",
"mr",
"ms",
"my",
"nl",
"pt",
"ru",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ur",
"vi",
"yo",
"zh",
"arxiv:1904.09223",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-07T09:40:03Z | ---
license: apache-2.0
language:
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
---
# Model Card for EntityCS-39-PEP_MS-xlmr-base
This model has been trained on the EntityCS corpus, an English corpus from Wikipedia with replaced entities in different languages.
The corpus can be found at [https://huggingface.co/huawei-noah/entity_cs](https://huggingface.co/huawei-noah/entity_cs); check the link for more details.
Firstly, we employ the conventional 80-10-10 MLM objective, where 15% of sentence subwords are considered as masking candidates. From those, we replace subwords
with [MASK] 80% of the time, with Random subwords (from the entire vocabulary) 10% of the time, and leave the remaining 10% unchanged (Same).
To integrate entity-level cross-lingual knowledge into the model, we propose Entity Prediction objectives, where we only mask subwords belonging
to an entity. By predicting the masked entities in ENTITYCS sentences, we expect the model to capture the semantics of the same entity in different
languages.
Two different masking strategies are proposed for predicting entities: Whole Entity Prediction (`WEP`) and Partial Entity Prediction (`PEP`).
In WEP, motivated by [Sun et al. (2019)](https://arxiv.org/abs/1904.09223) where whole word masking is also adopted, we consider all the words (and consequently subwords) inside
an entity as masking candidates. Then, 80% of the time we mask every subword inside an entity, and
20% of the time we keep the subwords intact. Note that, as our goal is to predict the entire masked
entity, we do not allow replacing with Random subwords, since it can introduce noise and result
in the model predicting incorrect entities. After entities are masked, we remove the entity indicators
`<e>`, `</e>` from the sentences before feeding them to the model.
For PEP, we also consider all entities as masking candidates. In contrast to WEP, we do not force
subwords belonging to one entity to be either all masked or all unmasked. Instead, each individual
entity subword is masked 80% of the time. For the remaining 20% of the masking candidates, we experiment with three different replacements. First,
PEP<sub>MRS</sub> corresponds to the conventional 80-10-10 masking strategy, where 10% of the remaining
subwords are replaced with Random subwords and the other 10% are kept unchanged. In the second
setting, PEP<sub>MS</sub>, we remove the 10% Random subwords substitution, i.e. we predict the 80% masked
subwords and 10% Same subwords from the masking candidates. In the third setting, PEP<sub>M</sub>, we
further remove the 10% Same subwords prediction, essentially predicting only the masked subwords.
Prior work has shown that it is effective to combine
Entity Prediction with MLM for cross-lingual transfer ([Jiang et al., 2020](https://aclanthology.org/2020.emnlp-main.479/)), therefore we investigate the
combination of the Entity Prediction objectives together with MLM on non-entity subwords. Specifically, when combined with MLM, we lower the
entity masking probability (p) to 50% to roughly keep the same overall masking percentage.
This results in the following objectives: WEP + MLM, PEP<sub>MRS</sub> + MLM, PEP<sub>MS</sub> + MLM, PEP<sub>M</sub> + MLM.
This model was trained with the **PEP<sub>MS</sub>** objective on the EntityCS corpus with 39 languages.
## Model Details
### Training Details
We start from the [XLM-R-base](https://huggingface.co/xlm-roberta-base) model and train for 1 epoch on 8 Nvidia V100 32GB GPUs.
We set batch size to 16 and gradient accumulation steps to 2, resulting in an effective batch size of 256.
For speedup we use fp16 mixed precision.
We use the sampling strategy proposed by [Conneau and Lample (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf), where high resource languages are down-sampled and low
resource languages get sampled more frequently.
We only train the embedding and the last two layers of the model.
We randomly choose 100 sentences from each language to serve as a validation set, on which we measure the perplexity every 10K training steps.
**This checkpoint corresponds to the one with the lowest perplexity on the validation set.**
## Usage
The current model can be used for further fine-tuning on downstream tasks.
In the paper, we focused on entity-related tasks, such as NER, Word Sense Disambiguation and Slot Filling.
Alternatively, it can be used directly (without fine-tuning) for probing tasks, i.e. predicting missing words, as in [X-FACTR](https://aclanthology.org/2020.emnlp-main.479/).
## How to Get Started with the Model
Use the code below to get started with the model: https://github.com/huawei-noah/noah-research/tree/master/NLP/EntityCS
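In addition to the repository above, a minimal sketch of the fine-tuning route for an entity-related task such as NER is shown below; the label count is a placeholder, not part of this release:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "huawei-noah/EntityCS-39-PEP_MS-xlmr-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# num_labels is a placeholder: set it to the size of your NER tag set.
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=9)

# From here, fine-tune as usual (e.g. with the Trainer API) on your labelled data.
```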
## Citation
**BibTeX:**
```bibtex
@inproceedings{whitehouse-etal-2022-entitycs,
title = "{E}ntity{CS}: Improving Zero-Shot Cross-lingual Transfer with Entity-Centric Code Switching",
author = "Whitehouse, Chenxi and
Christopoulou, Fenia and
Iacobacci, Ignacio",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.499",
pages = "6698--6714"
}
```
## Model Card Contact
[Fenia Christopoulou](mailto:[email protected]) |
DakshBeniwal/test_model | DakshBeniwal | 2023-09-13T15:12:28Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:unknown",
"region:us"
]
| null | 2023-09-13T15:08:09Z | ---
license: unknown
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CyberHarem/shirona_pokemon | CyberHarem | 2023-09-13T15:07:23Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/shirona_pokemon",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-13T14:51:57Z | ---
license: mit
datasets:
- CyberHarem/shirona_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of shirona_pokemon
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6500, you need to download `6500/shirona_pokemon.pt` as the embedding and `6500/shirona_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
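One possible way to combine the two files with `diffusers` is sketched below; the loader calls and file paths are assumptions (the card only states that the pt file is an embedding and the safetensors file is a LoRA), and files produced by HCP-Diffusion may require conversion before they load this way:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model used for the preview images in this card (assumption: it also works for local generation).
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the pt file as a textual-inversion embedding; the trigger word is listed below.
pipe.load_textual_inversion("6500/shirona_pokemon.pt", token="shirona_pokemon")

# Load the safetensors file as LoRA weights.
pipe.load_lora_weights("6500", weight_name="shirona_pokemon.safetensors")

image = pipe("shirona_pokemon, blonde_hair, long_hair, smile, very_long_hair").images[0]
image.save("shirona_pokemon.png")
```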
**The best step we recommend is 6500**, with a score of 0.893. The trigger words are:
1. `shirona_pokemon`
2. `blonde_hair, long_hair, hair_over_one_eye, hair_ornament, breasts, grey_eyes, smile, very_long_hair, cleavage, closed_mouth, large_breasts`
We do not recommend using this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.830 | [Download](7500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.879 | [Download](7000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| **6500** | **0.893** | [**Download**](6500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.798 | [Download](6000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.820 | [Download](5500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.844 | [Download](5000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.844 | [Download](4500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.803 | [Download](4000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.721 | [Download](3500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.821 | [Download](3000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.732 | [Download](2500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.714 | [Download](2000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.750 | [Download](1500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.700 | [Download](1000/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.737 | [Download](500/shirona_pokemon.zip) |  |  |  |  |  | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
pmarar96/sd-class-butterflies-32 | pmarar96 | 2023-09-13T14:59:42Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-09-13T14:59:36Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('pmarar96/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
michaelfeil/ct2fast-CodeLlama-7b-hf | michaelfeil | 2023-09-13T14:58:27Z | 7 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"ctranslate2",
"int8",
"float16",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-11T07:39:12Z | ---
language:
- code
pipeline_tag: text-generation
tags:
- ctranslate2
- int8
- float16
- llama-2
license: llama2
---
# Fast Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf)
```bash
pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-CodeLlama-7b-hf"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-09-13 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 7B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
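Since infilling is listed among the capabilities, a hedged sketch of infill generation follows; it relies on the `<FILL_ME>` placeholder that recent `transformers` tokenizers handle for Code Llama, so treat the exact token handling as an assumption:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# <FILL_ME> marks the span to be filled in between the prefix and the suffix.
prompt = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''

input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens and splice them back into the prompt.
filling = tokenizer.batch_decode(output[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```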
## Model Details
*Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base model of 7B parameters.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide). |