modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
CyberHarem/itsuwa_toarumajutsunoindex | CyberHarem | 2023-09-16T06:07:04Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/itsuwa_toarumajutsunoindex",
"license:mit",
"region:us"
]
| text-to-image | 2023-08-15T21:57:11Z | ---
license: mit
datasets:
- CyberHarem/itsuwa_toarumajutsunoindex
pipeline_tag: text-to-image
tags:
- art
---
# Lora of itsuwa_toarumajutsunoindex
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, use them together: the pt file serves as an embedding, while the safetensors file is loaded as the LoRA.
For example, to use the model from step 5760, download `5760/itsuwa_toarumajutsunoindex.pt` as the embedding and `5760/itsuwa_toarumajutsunoindex.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character.
**The best step we recommend is 5760**, with a score of 0.930. The trigger words are:
1. `itsuwa_toarumajutsunoindex`
2. `brown_eyes, brown_hair, short_hair`
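Once both files are downloaded, loading them might look like the sketch below. This is an assumption, not the card's official recipe: it uses diffusers-style APIs, HCP-Diffusion LoRAs may need conversion before diffusers can load them directly, and the file paths simply mirror the step-5760 layout described above.

```python
def build_pipeline(base_model="Meina/MeinaMix_V11"):
    # Heavy imports kept inside the function so the sketch stays cheap to import.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
    # The .pt file acts as a textual-inversion embedding bound to the trigger word.
    pipe.load_textual_inversion(
        "5760/itsuwa_toarumajutsunoindex.pt", token="itsuwa_toarumajutsunoindex"
    )
    # The .safetensors file carries the LoRA weights.
    pipe.load_lora_weights("5760/itsuwa_toarumajutsunoindex.safetensors")
    return pipe


def build_prompt():
    # Combine both recommended trigger-word groups into one prompt.
    return "itsuwa_toarumajutsunoindex, brown_eyes, brown_hair, short_hair"
```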
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.917 | [Download](7200/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/pattern_16.png) | [<NSFW, click to see>](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.905 | [Download](6720/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/pattern_16.png) | [<NSFW, click to see>](6720/previews/bikini.png) | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.914 | [Download](6240/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_16.png) | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| **5760** | **0.930** | [**Download**](5760/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/pattern_16.png) | [<NSFW, click to see>](5760/previews/bikini.png) | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.912 | [Download](5280/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/pattern_16.png) | [<NSFW, click to see>](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.829 | [Download](4800/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/pattern_16.png) | [<NSFW, click to see>](4800/previews/bikini.png) | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.868 | [Download](4320/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/pattern_16.png) | [<NSFW, click to see>](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.834 | [Download](3840/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/pattern_16.png) | [<NSFW, click to see>](3840/previews/bikini.png) | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.821 | [Download](3360/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/pattern_16.png) | [<NSFW, click to see>](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.685 | [Download](2880/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/pattern_16.png) | [<NSFW, click to see>](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.798 | [Download](2400/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/pattern_16.png) | [<NSFW, click to see>](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.723 | [Download](1920/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/pattern_16.png) | [<NSFW, click to see>](1920/previews/bikini.png) | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.823 | [Download](1440/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/pattern_16.png) | [<NSFW, click to see>](1440/previews/bikini.png) | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.544 | [Download](960/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/pattern_16.png) | [<NSFW, click to see>](960/previews/bikini.png) | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.322 | [Download](480/itsuwa_toarumajutsunoindex.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/pattern_16.png) | [<NSFW, click to see>](480/previews/bikini.png) | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
LuisChDev/LunarLander-v2-ppo | LuisChDev | 2023-09-16T06:04:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-16T06:04:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.31 +/- 20.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is an assumption — check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
# NOTE: the filename is assumed, not confirmed by this card.
checkpoint = load_from_hub(
    repo_id="LuisChDev/LunarLander-v2-ppo",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
MouseTrap/StyleGen-Loopster-DL | MouseTrap | 2023-09-16T05:57:46Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:riffusion/riffusion-model-v1",
"base_model:adapter:riffusion/riffusion-model-v1",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-16T05:50:05Z |
---
license: creativeml-openrail-m
base_model: riffusion/riffusion-model-v1
instance_prompt: Loopster style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MouseTrap/StyleGen-Looper
These are LoRA adaptation weights for riffusion/riffusion-model-v1. The weights were trained on the "Loopster style" prompt using [DreamBooth](https://dreambooth.github.io/). Example images can be found below.
LoRA for the text encoder was enabled: False.
|
mbarekat/ppo-SnowballTarget | mbarekat | 2023-09-16T05:51:35Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-09-16T05:51:31Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
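The `<your_configuration_file_path.yaml>` argument points at a trainer configuration file; a minimal illustrative example for this environment might look like the following (the hyperparameter values are placeholders, not the ones used for this model):

```yaml
behaviors:
  SnowballTarget:
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 256
      num_layers: 2
    max_steps: 200000
```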
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mbarekat/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
om-ashish-soni/pos-ner-tagging-v3 | om-ashish-soni | 2023-09-16T05:46:42Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:om-ashish-soni/pos-ner-tagging-v2",
"base_model:finetune:om-ashish-soni/pos-ner-tagging-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-16T05:35:20Z | ---
license: apache-2.0
base_model: om-ashish-soni/pos-ner-tagging-v2
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-ner-tagging-v3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9339443388845423
- name: Recall
type: recall
value: 0.9374228724000987
- name: F1
type: f1
value: 0.9356803726596793
- name: Accuracy
type: accuracy
value: 0.9272679107552835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-ner-tagging-v3
This model is a fine-tuned version of [om-ashish-soni/pos-ner-tagging-v2](https://huggingface.co/om-ashish-soni/pos-ner-tagging-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6356
- Precision: 0.9339
- Recall: 0.9374
- F1: 0.9357
- Accuracy: 0.9273
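F1 here is the harmonic mean of precision and recall, which is easy to verify from the reported values:

```python
precision = 0.9339443388845423
recall = 0.9374228724000987

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9357
```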
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
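The effective batch size follows directly from the per-device batch size and gradient accumulation:

```python
train_batch_size = 8
gradient_accumulation_steps = 4

# Gradients accumulate over 4 forward/backward passes before each optimizer
# step, so the effective (total) train batch size is their product.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32
```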
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 439 | 0.6415 | 0.9341 | 0.9367 | 0.9354 | 0.9265 |
| 0.0078 | 2.0 | 878 | 0.6372 | 0.9327 | 0.9363 | 0.9345 | 0.9259 |
| 0.006 | 3.0 | 1317 | 0.6283 | 0.9338 | 0.9373 | 0.9356 | 0.9274 |
| 0.0036 | 4.0 | 1756 | 0.6356 | 0.9339 | 0.9374 | 0.9357 | 0.9273 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
om-ashish-soni/pos-ner-tagging-v2 | om-ashish-soni | 2023-09-16T05:25:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:om-ashish-soni/pos-ner-tagging-v2",
"base_model:finetune:om-ashish-soni/pos-ner-tagging-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-16T04:22:26Z | ---
license: apache-2.0
base_model: om-ashish-soni/pos-ner-tagging-v2
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-ner-tagging-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9393653920267203
- name: Recall
type: recall
value: 0.9408358887483113
- name: F1
type: f1
value: 0.9401000653531749
- name: Accuracy
type: accuracy
value: 0.9270324365691411
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-ner-tagging-v2
This model is a fine-tuned version of [om-ashish-soni/pos-ner-tagging-v2](https://huggingface.co/om-ashish-soni/pos-ner-tagging-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6442
- Precision: 0.9394
- Recall: 0.9408
- F1: 0.9401
- Accuracy: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3297 | 1.0 | 1756 | 0.4190 | 0.9189 | 0.9231 | 0.9210 | 0.9051 |
| 0.2521 | 2.0 | 3512 | 0.3836 | 0.9210 | 0.9300 | 0.9255 | 0.9114 |
| 0.1932 | 3.0 | 5268 | 0.4155 | 0.9295 | 0.9338 | 0.9316 | 0.9183 |
| 0.1325 | 4.0 | 7024 | 0.3969 | 0.9328 | 0.9356 | 0.9342 | 0.9211 |
| 0.0973 | 5.0 | 8780 | 0.4247 | 0.9332 | 0.9367 | 0.9349 | 0.9222 |
| 0.0799 | 6.0 | 10536 | 0.4606 | 0.9338 | 0.9374 | 0.9356 | 0.9229 |
| 0.0554 | 7.0 | 12292 | 0.4836 | 0.9333 | 0.9379 | 0.9356 | 0.9239 |
| 0.0415 | 8.0 | 14048 | 0.5271 | 0.9361 | 0.9391 | 0.9376 | 0.9245 |
| 0.0285 | 9.0 | 15804 | 0.5363 | 0.9366 | 0.9397 | 0.9381 | 0.9253 |
| 0.022 | 10.0 | 17560 | 0.5653 | 0.9377 | 0.9396 | 0.9387 | 0.9258 |
| 0.0146 | 11.0 | 19316 | 0.5962 | 0.9374 | 0.9400 | 0.9387 | 0.9259 |
| 0.0121 | 12.0 | 21072 | 0.6061 | 0.9385 | 0.9401 | 0.9393 | 0.9266 |
| 0.0085 | 13.0 | 22828 | 0.6263 | 0.9384 | 0.9403 | 0.9394 | 0.9261 |
| 0.0062 | 14.0 | 24584 | 0.6365 | 0.9381 | 0.9399 | 0.9390 | 0.9259 |
| 0.0053 | 15.0 | 26340 | 0.6386 | 0.9384 | 0.9402 | 0.9393 | 0.9264 |
| 0.0042 | 16.0 | 28096 | 0.6442 | 0.9394 | 0.9408 | 0.9401 | 0.9270 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lrhegde/DiffusionModelImageFromText | lrhegde | 2023-09-16T05:14:54Z | 0 | 0 | null | [
"text-to-image",
"license:openrail",
"region:us"
]
| text-to-image | 2023-09-16T05:10:01Z | ---
license: openrail
metrics:
- accuracy
pipeline_tag: text-to-image
--- |
codecompletedeployment/st-codesearch-distilroberta-base | codecompletedeployment | 2023-09-16T05:14:13Z | 2 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"dataset:code_search_net",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-15T20:48:43Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- code_search_net
---
# flax-sentence-embeddings/st-codesearch-distilroberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It was trained on the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset and can be used to search program code given text.
## Usage:
```python
from sentence_transformers import SentenceTransformer, util
# This list defines the different program code snippets
code = ["""def sort_list(x):
return sorted(x)""",
"""def count_above_threshold(elements, threshold=0):
counter = 0
for e in elements:
if e > threshold:
counter += 1
return counter""",
"""def find_min_max(elements):
min_ele = 99999
max_ele = -99999
for e in elements:
if e < min_ele:
min_ele = e
if e > max_ele:
max_ele = e
return min_ele, max_ele"""]
model = SentenceTransformer("flax-sentence-embeddings/st-codesearch-distilroberta-base")
# Encode our code into the vector space
code_emb = model.encode(code, convert_to_tensor=True)
# Interactive demo: Enter queries, and the method returns the best function from the
# 3 functions we defined
while True:
query = input("Query: ")
query_emb = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, code_emb)[0]
top_hit = hits[0]
print("Cossim: {:.2f}".format(top_hit['score']))
print(code[top_hit['corpus_id']])
print("\n\n")
```
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('flax-sentence-embeddings/st-codesearch-distilroberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Training
The model was trained with a DistilRoBERTa-base model for 10k training steps on the codesearch dataset with batch_size 256 and MultipleNegativesRankingLoss.
This is a preliminary model: it was not extensively tested, and the training setup was not particularly sophisticated.
The model was trained with the parameters:
**DataLoader**:
`MultiDatasetDataLoader.MultiDatasetDataLoader` of length 5371 with parameters:
```
{'batch_size': 256}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20, 'similarity_fct': 'dot_score'}
```
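In MultipleNegativesRankingLoss, each (text, code) pair in a batch treats every other pair's code as a negative: the scaled dot-product similarity matrix goes through a softmax cross-entropy with the diagonal as the targets. A dependency-free sketch of the computation (not the library's implementation):

```python
import math


def mnr_loss(queries, positives, scale=20.0):
    """Sketch of MultipleNegativesRankingLoss with dot-product similarity:
    the i-th positive is the target for the i-th query; all other in-batch
    positives act as negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    losses = []
    for i, q in enumerate(queries):
        scores = [scale * dot(q, p) for p in positives]
        m = max(scores)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(s - m) for s in scores))
        losses.append(log_denom - scores[i])  # -log softmax of the true pair
    return sum(losses) / len(losses)


# Matching pairs give a near-zero loss; mismatched pairs are penalized.
matched = mnr_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
mismatched = mnr_loss([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```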
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "warmupconstant",
"steps_per_epoch": 10000,
"warmup_steps": 500,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
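Because the final `Normalize()` module produces unit-length embeddings, the `dot_score` similarity used during training coincides with cosine similarity, as a small check illustrates:

```python
import math


def normalize(v):
    # Scale a vector to unit length, mirroring the Normalize() module.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]


a = normalize([3.0, 4.0])
b = normalize([1.0, 2.0])

# For unit vectors, the dot product *is* the cosine similarity.
dot = sum(x * y for x, y in zip(a, b))
cosine = (3.0 * 1.0 + 4.0 * 2.0) / (5.0 * math.sqrt(5.0))
```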
## Citing & Authors
<!--- Describe where people can find more information --> |
dislikename/sd-class-butterflies-32 | dislikename | 2023-09-16T04:47:58Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-09-16T04:47:52Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dislikename/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
nightdude/config_8113572 | nightdude | 2023-09-16T04:35:17Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-16T04:33:11Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
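The same quantization setup can be re-created in `transformers` roughly as follows (a sketch; it assumes the `transformers`, `torch`, and `bitsandbytes` packages are available at runtime):

```python
def make_bnb_config():
    # Imports kept local so the sketch doesn't require the packages at import time.
    import torch
    from transformers import BitsAndBytesConfig

    return BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
```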
### Framework versions
- PEFT 0.5.0.dev0
|
raulangelj/huggingface_sentiment_analysis | raulangelj | 2023-09-16T03:21:26Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-26T03:04:04Z | ---
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: huggingface_sentiment_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# huggingface_sentiment_analysis
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.61
- F1: 0.6609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 25
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
LuisCarlosJP/Reinforce-CartPole-v1 | LuisCarlosJP | 2023-09-16T03:05:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-16T03:04:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nikhilwani/casual_llm_updated | nikhilwani | 2023-09-16T02:44:25Z | 147 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-16T02:30:39Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: casual_llm_updated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# casual_llm_updated
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7268
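For a causal language model, the evaluation loss is a per-token cross-entropy, so it maps to perplexity via the exponential:

```python
import math

eval_loss = 3.7268
# Perplexity is exp of the per-token cross-entropy loss.
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))  # ≈ 41.5
```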
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7194 | 1.0 | 1133 | 3.7342 |
| 3.6485 | 2.0 | 2266 | 3.7292 |
| 3.6234 | 3.0 | 3399 | 3.7268 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Doctor-Shotgun/CalliopeDS-L2-13B | Doctor-Shotgun | 2023-09-16T02:30:16Z | 1,849 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"en",
"arxiv:2306.01708",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-16T01:11:49Z | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: agpl-3.0
---
# Model Card: CalliopeDS-L2-13B
This is a Llama 2-based model consisting of a merge of several models using a weight-adjusted TIES merge ([Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708)):
- [jondurbin/airoboros-l2-13b-2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
- [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
- [lemonilia/limarp-llama2-v2](https://huggingface.co/lemonilia/limarp-llama2-v2)
- [PygmalionAI/pygmalion-2-13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
Charles Goddard's [mergekit](https://github.com/cg123/mergekit) repo was used to perform these operations.
The purpose of this merge was to create a model that excels at creative writing and roleplay while maintaining general intelligence and instruction-following capabilities. In testing, it has shown to be capable at producing descriptive and verbose responses while demonstrating a solid understanding of the context.
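A TIES merge of this kind can be described to [mergekit](https://github.com/cg123/mergekit) with a config along these lines — the base model, weights, and densities below are placeholders for illustration, not the values actually used:

```yaml
merge_method: ties
base_model: <llama-2-13b base model>  # placeholder
models:
  - model: jondurbin/airoboros-l2-13b-2.2
    parameters: {weight: 0.2, density: 0.5}
  - model: elinas/chronos-13b-v2
    parameters: {weight: 0.2, density: 0.5}
  - model: NousResearch/Nous-Hermes-Llama2-13b
    parameters: {weight: 0.2, density: 0.5}
  - model: lemonilia/limarp-llama2-v2
    parameters: {weight: 0.2, density: 0.5}
  - model: PygmalionAI/pygmalion-2-13b
    parameters: {weight: 0.2, density: 0.5}
dtype: float16
```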
## Usage:
Due to this being a merge of multiple models, different prompt formats may work, but you can try the Alpaca instruction format used by LIMARP v2:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.
### Input:
User: {utterance}
### Response:
Character: {utterance}
```
Or the Pygmalion/Metharme format:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model was also tested using a system prompt with no instruction sequences:
```
Write Character's next reply in the roleplay between User and Character. Stay in character and write creative responses that move the scenario forward. Narrate in detail, using elaborate descriptions. The following is your persona:
{{persona}}
[Current conversation]
User: {utterance}
Character: {utterance}
```
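For programmatic use, a prompt in the Alpaca/LIMARP format above can be assembled as a plain string before tokenization. The sketch below is illustrative only; the helper name and the example personas/utterances are not part of the model:

```python
def build_limarp_prompt(bot_persona, user_persona, scenario, history):
    """Assemble an Alpaca/LIMARP-style prompt.

    `history` is a list of (speaker, utterance) pairs; the final
    "Character:" line is left open for the model to complete.
    """
    lines = [
        "### Instruction:",
        f"Character's Persona: {bot_persona}",
        f"User's Persona: {user_persona}",
        f"Scenario: {scenario}",
        "Play the role of Character. You must engage in a roleplaying chat "
        "with User below this line. Do not write dialogues and narration "
        "for User. Character should respond with messages of medium length.",
        "",
        "### Input:",
    ]
    lines += [f"{speaker}: {utterance}" for speaker, utterance in history]
    lines += ["", "### Response:", "Character:"]
    return "\n".join(lines)

prompt = build_limarp_prompt(
    "a stoic knight", "a curious traveler",
    "the pair shelter from a storm", [("User", "Hello!")],
)
print(prompt)
```

The resulting string can then be tokenized and passed to the model like any other prompt.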
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details. |
stablediffusionapi/copax-timelessxl-sdxl10 | stablediffusionapi | 2023-09-16T02:26:22Z | 820 | 5 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2023-09-16T02:15:36Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# copax-timelessxl-sdxl10 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "copax-timelessxl-sdxl10".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/copax-timelessxl-sdxl10)
Model link: [View model](https://stablediffusionapi.com/models/copax-timelessxl-sdxl10)
Credits: [View credits](https://civitai.com/?query=copax-timelessxl-sdxl10)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "copax-timelessxl-sdxl10",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
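The endpoint returns a JSON body. The sketch below shows one way to handle it; note that the `status` and `output` field names are assumptions based on the API docs linked above, so verify them against a real response:

```python
import json

def extract_image_urls(response_text):
    """Parse the API response. The `status` and `output` field names
    are assumptions; check them against the docs before relying on this."""
    data = json.loads(response_text)
    if data.get("status") != "success":
        raise RuntimeError(f"generation failed: {data}")
    return data.get("output", [])

# Example with a mocked response body (no network call):
sample = '{"status": "success", "output": ["https://example.com/img.png"]}'
print(extract_image_urls(sample))  # ['https://example.com/img.png']
```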
> Use this coupon code to get 25% off **DMGG0RBN** |
jennyc/ip_rating | jennyc | 2023-09-16T02:12:16Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-11T00:12:07Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: ip_rating
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ip_rating
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
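For reference, the `linear` scheduler above decays the learning rate from its initial value to zero over the course of training (assuming zero warmup steps). A minimal sketch of that schedule, with an illustrative step count:

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Linear decay from base_lr to 0, matching the `linear` scheduler
    above under the assumption of zero warmup steps."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 3 * 1000  # num_epochs * steps_per_epoch (steps_per_epoch is illustrative)
print(linear_lr(0, total))      # 2e-05 at the start
print(linear_lr(total, total))  # 0.0 at the end
```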
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Reham721/Subjective_QG | Reham721 | 2023-09-16T02:03:36Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ar",
"dataset:squad",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-15T20:17:26Z | ---
datasets:
- squad
language:
- ar
pipeline_tag: text2text-generation
--- |
lyogavin/Anima-7B-100K | lyogavin | 2023-09-16T01:59:42Z | 1,537 | 31 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"100k",
"7b",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-14T14:47:16Z | ---
license: apache-2.0
language:
- en
tags:
- llama2
- 100k
- 7b
---
Anima LLM supporting 100K input token length. It's trained based on Llama 2 7B, so the license supports commercial use!
We carefully curated a long-QA training dataset with lengths from 30k to 100k tokens to train this model. We also made a lot of memory optimizations to make it scale to 100k tokens.
## How to train/infer?
#### install dependencies
```bash
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/xentropy
pip install evaluate
pip install git+https://github.com/huggingface/[email protected]
pip install wandb
```
#### inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
base_model = "lyogavin/Anima-7B-100K"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
base_model,
torch_dtype=torch.float16,
trust_remote_code=True,
device_map="auto",
)
model.eval()
prompt = "Where is the capital of US?"
inputs = tokenizer(prompt, return_tensors="pt")
inputs['input_ids'] = inputs['input_ids'].cuda()
inputs['attention_mask'] = inputs['attention_mask'].cuda()
# Generate
generate_ids = model.generate(**inputs, max_new_tokens=30,
only_last_logit=True, # to save memory
use_cache=False, # when run into OOM, enable this can save memory
xentropy=True)
output = tokenizer.batch_decode(generate_ids,
                                skip_special_tokens=True,
                                clean_up_tokenization_spaces=False)[0]
print(output)
```
#### Training
```bash
./run_longer_training.sh
```
## Evaluations
There are almost no evaluation datasets designed for 100k tokens, so we designed/curated our own for this model. We compared this model against several other public/private models.
#### 1. longchat topic retrieval
| Model | Accuracy |
|-------------------|---------|
| Claude2 | 0.9 |
| together llama2 32k | 0.15 |
| longchat 32k 1.5 | 0.05 |
| Anima 100K | 0.5 |
#### 2. longchat number retrieval
| Model | Accuracy |
|-------------------|---------|
| Claude2 | 0.85 |
| together llama2 32k | 0.2 |
| longchat 32k 1.5 | 0.05 |
| Anima 100K | 0.45 |
#### 3. Narrative QA in zeroscore
| Model | F1 |
|-------------------|---------|
| Claude2 | 0.6187 |
| together llama2 32k | 0.3833 |
| longchat 32k 1.5 | 0.2416 |
| Anima 100K | 0.4919 |
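For context, the F1 in the NarrativeQA comparison above is typically the SQuAD-style token-overlap F1 between the predicted and reference answers. The sketch below shows that standard metric; it is not necessarily the exact evaluation script used for these numbers:

```python
from collections import Counter

def token_f1(prediction, reference):
    """SQuAD-style token-overlap F1 between two answer strings."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the capital is Washington", "Washington"))  # 0.4
```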
## Github
Github repo is [here](https://github.com/lyogavin/Anima/tree/main/anima_100k) |
JoseVallar01/prueba13 | JoseVallar01 | 2023-09-16T01:41:04Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-09-13T21:43:52Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jncraton/phi-1_5-ct2-int8 | jncraton | 2023-09-16T01:37:13Z | 3 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"arxiv:2309.05463",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-16T01:35:03Z | ---
license: other
language:
- en
pipeline_tag: text-generation
---
## Model Summary
The language model phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
We **did not** fine-tune phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.
## Intended Uses
Given the nature of the training data, phi-1.5 is best suited for prompts using the QA format, the chat format, and the code format. Note that phi-1.5, being a base model, often produces irrelevant text following the main answer. In the following example, we've truncated the answer for illustrative purposes only.
#### QA format:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```
where the model generates the text after "Answer:".
#### Chat format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
Alice: That's a good idea. I'll give it a try.
Charlie: Another thing that can help is to break up your study sessions into smaller chunks. It's easier to concentrate on one thing at a time.
Alice: That makes sense. I'll try that too.
Bob: And don't forget to take breaks! It's important to give your brain a rest so you can come back to your studies with a fresh perspective.
Alice: Thanks for the advice, guys. I feel more motivated now.
Charlie: No problem, Alice. We're all in this together.
Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```
where the model generates the text after the first "Bob:".
#### Code format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes**
* phi-1.5 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
## Limitations of phi-1.5
* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: If the model generates Python scripts that utilize uncommon packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other language outside of English might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Regardless of the safe data used for its training, the model is not entirely free from societal biases. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model is trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only -- we hope to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32xA100-40G
* Training time: 8 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [flash-attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx).
### Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device('cuda')
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
inputs = tokenizer('''```python
def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
**Remark.** In the generation function, our model currently does not support beam search (`num_beams` > 1) or the `attention_mask` parameter.
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
### Citation
You can find the paper at https://arxiv.org/abs/2309.05463
```bib
@article{textbooks2,
title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
journal={arXiv preprint arXiv:2309.05463},
year={2023}
}
``` |
stablediffusionapi/absolute-reality-v1.8.1 | stablediffusionapi | 2023-09-16T01:25:46Z | 45 | 3 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-16T01:22:05Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# absolute-reality-v1.8.1 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "absolute-reality-v1.8.1".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/absolute-reality-v1.8.1)
Model link: [View model](https://stablediffusionapi.com/models/absolute-reality-v1.8.1)
Credits: [View credits](https://civitai.com/?query=absolute-reality-v1.8.1)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "absolute-reality-v1.8.1",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
stablediffusionapi/indigo-furry-mix-v65 | stablediffusionapi | 2023-09-16T01:05:28Z | 54 | 0 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-16T00:29:41Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# indigo-furry-mix-v65 API Inference
,%20standing,%20solo,%20muscle,%20detailed%20scale%20texture,%20old%20castle,%20(battlefield),%20(tribal%20cloth.jpeg)
## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below and change **model_id** to "indigo-furry-mix-v65".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/indigo-furry-mix-v65)
Model link: [View model](https://stablediffusionapi.com/models/indigo-furry-mix-v65)
Credits: [View credits](https://civitai.com/?query=indigo-furry-mix-v65)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "indigo-furry-mix-v65",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Chanblock/llama-2-7b-langchain-chat-1000_dataset | Chanblock | 2023-09-16T00:27:15Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:llama2",
"region:us"
]
| null | 2023-09-15T23:59:23Z | ---
license: llama2
base_model: Photolens/llama-2-7b-langchain-chat
tags:
- generated_from_trainer
model-index:
- name: llama-2-7b-langchain-chat-1000_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-7b-langchain-chat-1000_dataset
This model is a fine-tuned version of [Photolens/llama-2-7b-langchain-chat](https://huggingface.co/Photolens/llama-2-7b-langchain-chat) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
fembuoy/Redshell | fembuoy | 2023-09-15T23:56:28Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-09-14T15:09:11Z | ---
license: openrail
---
World-famous vtuber and recovering overwatch addict Redshell

Model Info:
- Training: RVC v1 Harvest 600 epochs with 8 minutes of audio
- Recommended search feature rate 0.6-0.8
- For crepe, a hop length of 24 or lower is recommended
- NOTE: has issues with higher pitch input, might be due to the dataset
Voice Sample: https://vocaroo.com/19sZuBJsynLw |
manahil1/my_awesome_opus_books_model | manahil1 | 2023-09-15T23:45:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-15T23:16:21Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7142
- Bleu: 0.1327
- Gen Len: 11.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | 10.1215 | 0.0 | 19.0 |
| No log | 2.0 | 2 | 10.1215 | 0.0 | 19.0 |
| No log | 3.0 | 3 | 10.1215 | 0.0 | 19.0 |
| No log | 4.0 | 4 | 9.9493 | 0.0 | 19.0 |
| No log | 5.0 | 5 | 9.7067 | 0.0 | 19.0 |
| No log | 6.0 | 6 | 9.5209 | 0.0 | 19.0 |
| No log | 7.0 | 7 | 9.1640 | 0.0 | 19.0 |
| No log | 8.0 | 8 | 9.1640 | 0.0 | 19.0 |
| No log | 9.0 | 9 | 8.9257 | 0.0 | 19.0 |
| No log | 10.0 | 10 | 8.7095 | 0.0 | 19.0 |
| No log | 11.0 | 11 | 8.0234 | 0.0 | 19.0 |
| No log | 12.0 | 12 | 7.6148 | 0.0 | 19.0 |
| No log | 13.0 | 13 | 7.6148 | 0.0 | 19.0 |
| No log | 14.0 | 14 | 7.3894 | 0.0 | 19.0 |
| No log | 15.0 | 15 | 7.1168 | 0.0 | 19.0 |
| No log | 16.0 | 16 | 6.9173 | 0.0 | 19.0 |
| No log | 17.0 | 17 | 6.7148 | 0.0 | 19.0 |
| No log | 18.0 | 18 | 6.3630 | 0.0 | 19.0 |
| No log | 19.0 | 19 | 6.0068 | 0.0 | 19.0 |
| No log | 20.0 | 20 | 5.8264 | 0.0 | 19.0 |
| No log | 21.0 | 21 | 5.6897 | 0.0 | 19.0 |
| No log | 22.0 | 22 | 5.5416 | 0.0 | 19.0 |
| No log | 23.0 | 23 | 5.4310 | 0.0 | 19.0 |
| No log | 24.0 | 24 | 5.3268 | 0.6787 | 19.0 |
| No log | 25.0 | 25 | 5.2214 | 2.6287 | 19.0 |
| No log | 26.0 | 26 | 5.0786 | 2.6287 | 19.0 |
| No log | 27.0 | 27 | 4.9850 | 3.2603 | 19.0 |
| No log | 28.0 | 28 | 4.9030 | 3.6542 | 19.0 |
| No log | 29.0 | 29 | 4.8184 | 3.6542 | 19.0 |
| No log | 30.0 | 30 | 4.7408 | 3.6542 | 19.0 |
| No log | 31.0 | 31 | 4.6692 | 3.6542 | 19.0 |
| No log | 32.0 | 32 | 4.5869 | 3.6542 | 19.0 |
| No log | 33.0 | 33 | 4.4861 | 3.6542 | 19.0 |
| No log | 34.0 | 34 | 4.3921 | 3.6542 | 19.0 |
| No log | 35.0 | 35 | 4.3102 | 3.6542 | 19.0 |
| No log | 36.0 | 36 | 4.2375 | 3.6542 | 19.0 |
| No log | 37.0 | 37 | 4.1691 | 3.6542 | 19.0 |
| No log | 38.0 | 38 | 4.1019 | 3.6542 | 19.0 |
| No log | 39.0 | 39 | 4.0349 | 3.6542 | 19.0 |
| No log | 40.0 | 40 | 3.9652 | 3.6542 | 19.0 |
| No log | 41.0 | 41 | 3.8937 | 3.6542 | 19.0 |
| No log | 42.0 | 42 | 3.8232 | 3.6542 | 19.0 |
| No log | 43.0 | 43 | 3.7526 | 3.6542 | 19.0 |
| No log | 44.0 | 44 | 3.6845 | 3.6542 | 19.0 |
| No log | 45.0 | 45 | 3.6196 | 3.6542 | 19.0 |
| No log | 46.0 | 46 | 3.5549 | 3.6542 | 19.0 |
| No log | 47.0 | 47 | 3.4897 | 3.6542 | 19.0 |
| No log | 48.0 | 48 | 3.4227 | 3.6542 | 19.0 |
| No log | 49.0 | 49 | 3.3559 | 3.6542 | 19.0 |
| No log | 50.0 | 50 | 3.2901 | 3.6542 | 19.0 |
| No log | 51.0 | 51 | 3.2237 | 3.6542 | 19.0 |
| No log | 52.0 | 52 | 3.1568 | 3.6542 | 19.0 |
| No log | 53.0 | 53 | 3.0880 | 3.6542 | 19.0 |
| No log | 54.0 | 54 | 3.0184 | 3.6542 | 19.0 |
| No log | 55.0 | 55 | 2.9428 | 3.6542 | 19.0 |
| No log | 56.0 | 56 | 2.8787 | 3.6542 | 19.0 |
| No log | 57.0 | 57 | 2.8177 | 3.6542 | 19.0 |
| No log | 58.0 | 58 | 2.7606 | 3.6542 | 19.0 |
| No log | 59.0 | 59 | 2.7053 | 3.6542 | 19.0 |
| No log | 60.0 | 60 | 2.6458 | 3.6542 | 19.0 |
| No log | 61.0 | 61 | 2.5915 | 3.6542 | 19.0 |
| No log | 62.0 | 62 | 2.5416 | 3.6542 | 19.0 |
| No log | 63.0 | 63 | 2.4929 | 3.6542 | 19.0 |
| No log | 64.0 | 64 | 2.4465 | 3.6542 | 19.0 |
| No log | 65.0 | 65 | 2.4007 | 3.6542 | 19.0 |
| No log | 66.0 | 66 | 2.3560 | 3.6542 | 19.0 |
| No log | 67.0 | 67 | 2.3136 | 3.6542 | 19.0 |
| No log | 68.0 | 68 | 2.2712 | 3.6542 | 19.0 |
| No log | 69.0 | 69 | 2.2313 | 3.6542 | 19.0 |
| No log | 70.0 | 70 | 2.1924 | 3.6542 | 19.0 |
| No log | 71.0 | 71 | 2.1563 | 3.6542 | 19.0 |
| No log | 72.0 | 72 | 2.1213 | 3.6542 | 19.0 |
| No log | 73.0 | 73 | 2.0885 | 3.6542 | 19.0 |
| No log | 74.0 | 74 | 2.0577 | 3.6542 | 19.0 |
| No log | 75.0 | 75 | 2.0293 | 3.6542 | 19.0 |
| No log | 76.0 | 76 | 2.0023 | 3.6542 | 19.0 |
| No log | 77.0 | 77 | 1.9762 | 3.6542 | 19.0 |
| No log | 78.0 | 78 | 1.9514 | 3.6542 | 19.0 |
| No log | 79.0 | 79 | 1.9288 | 3.6542 | 19.0 |
| No log | 80.0 | 80 | 1.9076 | 3.6542 | 19.0 |
| No log | 81.0 | 81 | 1.8876 | 3.6542 | 19.0 |
| No log | 82.0 | 82 | 1.8691 | 3.6542 | 19.0 |
| No log | 83.0 | 83 | 1.8520 | 3.6542 | 19.0 |
| No log | 84.0 | 84 | 1.8362 | 3.6542 | 19.0 |
| No log | 85.0 | 85 | 1.8217 | 1.2446 | 15.2 |
| No log | 86.0 | 86 | 1.8080 | 1.2446 | 15.2 |
| No log | 87.0 | 87 | 1.7957 | 0.1327 | 11.4 |
| No log | 88.0 | 88 | 1.7846 | 0.1327 | 11.4 |
| No log | 89.0 | 89 | 1.7743 | 0.1327 | 11.4 |
| No log | 90.0 | 90 | 1.7651 | 0.1327 | 11.4 |
| No log | 91.0 | 91 | 1.7569 | 0.1327 | 11.4 |
| No log | 92.0 | 92 | 1.7493 | 0.1327 | 11.4 |
| No log | 93.0 | 93 | 1.7426 | 0.1327 | 11.4 |
| No log | 94.0 | 94 | 1.7367 | 0.1327 | 11.4 |
| No log | 95.0 | 95 | 1.7320 | 0.1327 | 11.4 |
| No log | 96.0 | 96 | 1.7273 | 0.1327 | 11.4 |
| No log | 97.0 | 97 | 1.7235 | 0.1327 | 11.4 |
| No log | 98.0 | 98 | 1.7200 | 0.1327 | 11.4 |
| No log | 99.0 | 99 | 1.7170 | 0.1327 | 11.4 |
| No log | 100.0 | 100 | 1.7142 | 0.1327 | 11.4 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
shantanudave/autotrain-adv-15sept | shantanudave | 2023-09-15T23:26:20Z | 1 | 2 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-15T23:26:18Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sdaveshantanu
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
platzi/platzi-distilroberta-base-mrpc-glue-alejandro-arroyo | platzi | 2023-09-15T23:26:13Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T23:15:45Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-alejandro-arroyo
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8357843137254902
- name: F1
type: f1
value: 0.8866328257191202
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-alejandro-arroyo
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9465
- Accuracy: 0.8358
- F1: 0.8866
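The reported metrics are standard binary accuracy and F1 on the positive (paraphrase) class. As a minimal illustrative sketch of how they are computed — toy labels only, not the actual MRPC evaluation:

```python
def accuracy_and_f1(y_true, y_pred):
    """Binary accuracy and F1 (positive class = 1), as reported for MRPC."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1

# toy example: 4 sentence pairs, one false negative
acc, f1 = accuracy_and_f1([1, 0, 1, 1], [1, 0, 0, 1])
```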
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4531 | 1.09 | 500 | 0.5192 | 0.8064 | 0.8636 |
| 0.2895 | 2.18 | 1000 | 1.0305 | 0.8186 | 0.8729 |
| 0.166 | 3.27 | 1500 | 0.9465 | 0.8358 | 0.8866 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AlienKevin/whisper-base-jyutping-without-tones-full-zh-HK | AlienKevin | 2023-09-15T23:25:11Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"base_model:AlienKevin/whisper-base-jyutping-without-tones-full",
"base_model:finetune:AlienKevin/whisper-base-jyutping-without-tones-full",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-15T22:49:05Z | ---
language:
- yue
license: apache-2.0
base_model: AlienKevin/whisper-base-jyutping-without-tones-full
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Base Jyutping without Tones Full Version trained with extra data from
Common Voice zh-HK
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Jyutping without Tones Full Version trained with extra data from Common Voice zh-HK
This model is a fine-tuned version of [AlienKevin/whisper-base-jyutping-without-tones-full](https://huggingface.co/AlienKevin/whisper-base-jyutping-without-tones-full) on the Common Voice 14.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0949
- Wer: 9.7694
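WER here is the word error rate: word-level edit distance (substitutions + deletions + insertions) normalized by the number of reference words, times 100. A minimal sketch of the computation — for illustration only, not the evaluation script used for the number above:

```python
def wer(reference, hypothesis):
    """Word error rate via word-level Levenshtein distance, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# one substitution + one deletion over a 4-word reference -> 50.0
score = wer("a b c d", "a x c")
```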
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 2400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0921 | 0.14 | 800 | 0.1049 | 10.4769 |
| 0.0824 | 0.28 | 1600 | 0.0989 | 9.8173 |
| 0.0611 | 0.42 | 2400 | 0.0949 | 9.7694 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
DriveMyScream/Pro_GAN_Image_Generator | DriveMyScream | 2023-09-15T23:24:43Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-09-15T23:23:38Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
MarianaChapman/RuzeShoesReviews | MarianaChapman | 2023-09-15T23:11:18Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-15T23:07:55Z | ---
license: bsl-1.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: image-to-text
tags:
- art
https://reviewsstate.com/ruze-shoes-reviews/- |
espnet/eason_chime4_asr2_e_branchformer12_conv1d1_raw_wavlm_large_21_km1k_bpe_rm2k_char_ts_sp | espnet | 2023-09-15T22:51:35Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:chime4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2023-09-15T19:31:09Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/eason_chime4_asr2_e_branchformer12_conv1d1_raw_wavlm_large_21_km1k_bpe_rm2k_char_ts_sp`
This model was trained by yichenl5 using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 83e687f3b41310a000f4a5b65857734709752bf6
pip install -e .
cd egs2/chime4/asr2
./run.sh --skip_data_prep false --skip_train true --download_model espnet/eason_chime4_asr2_e_branchformer12_conv1d1_raw_wavlm_large_21_km1k_bpe_rm2k_char_ts_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Sep 11 04:53:07 EDT 2023`
- python version: `3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0]`
- espnet version: `espnet 202308`
- pytorch version: `pytorch 1.13.1+cu117`
- Git hash: `83e687f3b41310a000f4a5b65857734709752bf6`
- Commit date: `Tue Aug 15 18:31:02 2023 -0400`
## exp/asr_train_discrete_asr_e_branchformer_e12_mlp1024_linear1024_macaron_lr1e-4_warmup25k_conv1d1_raw_wavlm_large_21_km1000_bpe_rm2000_char_ts_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dt05_real_beamformit_2mics|1640|27119|89.9|8.6|1.5|0.8|10.9|58.5|
|decode_asr_asr_model_valid.acc.ave/dt05_real_beamformit_5mics|1640|27119|92.2|6.6|1.2|0.6|8.4|54.3|
|decode_asr_asr_model_valid.acc.ave/dt05_real_isolated_1ch_track|1640|27119|89.0|9.6|1.4|0.9|11.9|63.5|
|decode_asr_asr_model_valid.acc.ave/dt05_simu_beamformit_2mics|1640|27120|89.8|8.4|1.7|0.7|10.8|61.8|
|decode_asr_asr_model_valid.acc.ave/dt05_simu_beamformit_5mics|1640|27120|93.3|5.7|1.0|0.4|7.1|54.9|
|decode_asr_asr_model_valid.acc.ave/dt05_simu_isolated_1ch_track|1640|27120|84.4|12.8|2.8|0.8|16.5|67.1|
|decode_asr_asr_model_valid.acc.ave/et05_real_beamformit_2mics|1320|21409|90.5|8.2|1.3|0.6|10.1|66.7|
|decode_asr_asr_model_valid.acc.ave/et05_real_beamformit_5mics|1320|21409|92.8|6.3|0.9|0.5|7.7|58.0|
|decode_asr_asr_model_valid.acc.ave/et05_real_isolated_1ch_track|1320|21409|88.3|10.0|1.7|0.8|12.5|71.5|
|decode_asr_asr_model_valid.acc.ave/et05_simu_beamformit_2mics|1320|21416|89.0|9.2|1.8|0.9|11.8|66.4|
|decode_asr_asr_model_valid.acc.ave/et05_simu_beamformit_5mics|1320|21416|92.7|6.4|0.9|0.7|8.0|61.0|
|decode_asr_asr_model_valid.acc.ave/et05_simu_isolated_1ch_track|1320|21416|83.3|13.2|3.5|1.1|17.8|69.4|
|decode_asr_asr_model_valid.acc.best/dt05_real_beamformit_2mics|1640|27119|89.5|9.2|1.3|0.9|11.4|60.2|
|decode_asr_asr_model_valid.acc.best/dt05_real_beamformit_5mics|1640|27119|91.7|7.1|1.2|0.6|8.9|57.6|
|decode_asr_asr_model_valid.acc.best/dt05_real_isolated_1ch_track|1640|27119|88.2|10.0|1.8|0.7|12.5|65.9|
|decode_asr_asr_model_valid.acc.best/dt05_simu_beamformit_2mics|1640|27120|89.3|9.1|1.6|0.7|11.4|64.9|
|decode_asr_asr_model_valid.acc.best/dt05_simu_beamformit_5mics|1640|27120|93.0|6.1|0.9|0.5|7.5|57.1|
|decode_asr_asr_model_valid.acc.best/dt05_simu_isolated_1ch_track|1640|27120|83.7|12.6|3.6|0.7|16.9|68.7|
|decode_asr_asr_model_valid.acc.best/et05_real_beamformit_2mics|1320|21409|89.7|8.9|1.3|0.6|10.9|70.1|
|decode_asr_asr_model_valid.acc.best/et05_real_beamformit_5mics|1320|21409|92.3|6.8|0.9|0.5|8.2|60.8|
|decode_asr_asr_model_valid.acc.best/et05_real_isolated_1ch_track|1320|21409|87.7|10.3|2.0|0.8|13.1|71.7|
|decode_asr_asr_model_valid.acc.best/et05_simu_beamformit_2mics|1320|21416|88.4|9.9|1.7|1.0|12.7|69.5|
|decode_asr_asr_model_valid.acc.best/et05_simu_beamformit_5mics|1320|21416|92.3|6.8|0.8|0.8|8.5|63.0|
|decode_asr_asr_model_valid.acc.best/et05_simu_isolated_1ch_track|1320|21416|82.6|13.6|3.8|1.0|18.4|71.4|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dt05_real_beamformit_2mics|1640|160390|95.4|2.2|2.4|1.0|5.6|58.5|
|decode_asr_asr_model_valid.acc.ave/dt05_real_beamformit_5mics|1640|160390|96.7|1.5|1.8|0.7|3.9|54.3|
|decode_asr_asr_model_valid.acc.ave/dt05_real_isolated_1ch_track|1640|160390|95.1|2.6|2.3|1.2|6.0|63.5|
|decode_asr_asr_model_valid.acc.ave/dt05_simu_beamformit_2mics|1640|160400|95.5|2.2|2.3|0.8|5.3|61.8|
|decode_asr_asr_model_valid.acc.ave/dt05_simu_beamformit_5mics|1640|160400|97.6|1.2|1.2|0.5|2.9|54.9|
|decode_asr_asr_model_valid.acc.ave/dt05_simu_isolated_1ch_track|1640|160400|91.8|4.1|4.2|1.3|9.5|67.1|
|decode_asr_asr_model_valid.acc.ave/et05_real_beamformit_2mics|1320|126796|96.3|1.9|1.8|0.7|4.4|66.7|
|decode_asr_asr_model_valid.acc.ave/et05_real_beamformit_5mics|1320|126796|97.5|1.2|1.3|0.6|3.1|58.0|
|decode_asr_asr_model_valid.acc.ave/et05_real_isolated_1ch_track|1320|126796|95.3|2.4|2.3|0.9|5.6|71.5|
|decode_asr_asr_model_valid.acc.ave/et05_simu_beamformit_2mics|1320|126812|95.3|2.2|2.5|1.0|5.7|66.4|
|decode_asr_asr_model_valid.acc.ave/et05_simu_beamformit_5mics|1320|126812|97.5|1.2|1.3|0.9|3.4|61.0|
|decode_asr_asr_model_valid.acc.ave/et05_simu_isolated_1ch_track|1320|126812|91.1|4.1|4.8|1.5|10.4|69.4|
|decode_asr_asr_model_valid.acc.best/dt05_real_beamformit_2mics|1640|160390|95.4|2.4|2.2|1.1|5.7|60.2|
|decode_asr_asr_model_valid.acc.best/dt05_real_beamformit_5mics|1640|160390|96.6|1.7|1.7|0.8|4.1|57.6|
|decode_asr_asr_model_valid.acc.best/dt05_real_isolated_1ch_track|1640|160390|94.9|2.6|2.6|1.0|6.2|65.9|
|decode_asr_asr_model_valid.acc.best/dt05_simu_beamformit_2mics|1640|160400|95.4|2.3|2.3|0.9|5.6|64.9|
|decode_asr_asr_model_valid.acc.best/dt05_simu_beamformit_5mics|1640|160400|97.6|1.2|1.2|0.6|3.0|57.1|
|decode_asr_asr_model_valid.acc.best/dt05_simu_isolated_1ch_track|1640|160400|91.4|4.0|4.6|1.2|9.7|68.7|
|decode_asr_asr_model_valid.acc.best/et05_real_beamformit_2mics|1320|126796|96.1|1.9|2.0|0.7|4.6|70.1|
|decode_asr_asr_model_valid.acc.best/et05_real_beamformit_5mics|1320|126796|97.4|1.3|1.3|0.6|3.2|60.8|
|decode_asr_asr_model_valid.acc.best/et05_real_isolated_1ch_track|1320|126796|95.1|2.4|2.5|0.9|5.8|71.7|
|decode_asr_asr_model_valid.acc.best/et05_simu_beamformit_2mics|1320|126812|95.0|2.4|2.6|1.1|6.1|69.5|
|decode_asr_asr_model_valid.acc.best/et05_simu_beamformit_5mics|1320|126812|97.4|1.3|1.3|0.9|3.6|63.0|
|decode_asr_asr_model_valid.acc.best/et05_simu_isolated_1ch_track|1320|126812|90.8|4.0|5.1|1.5|10.6|71.4|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_discrete_asr_e_branchformer_e12_mlp1024_linear1024_macaron_lr1e-4_warmup25k_conv1d1.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/asr_train_discrete_asr_e_branchformer_e12_mlp1024_linear1024_macaron_lr1e-4_warmup25k_conv1d1_raw_wavlm_large_21_km1000_bpe_rm2000_char_ts_sp
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 25
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_char_sp/train/text_shape.char
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_char_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_char_sp/valid/text_shape.char
- exp/asr_stats_raw_rm_wavlm_large_21_km1000_bpe2000_char_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 150
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/tr05_multi_noisy_si284_sp/text.ts.en
- text
- text
- - dump/raw/tr05_multi_noisy_si284_sp/text.rm.wavlm_large_21_km1000
- src_text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dt05_multi_isolated_1ch_track/text.ts.en
- text
- text
- - dump/raw/dt05_multi_isolated_1ch_track/text.rm.wavlm_large_21_km1000
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.0001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- O
- I
- N
- S
- R
- H
- D
- L
- C
- M
- U
- P
- F
- G
- Y
- B
- W
- V
- .
- K
- ''''
- X
- ','
- Q
- '-'
- J
- '"'
- <
- '>'
- Z
- '*'
- ':'
- (
- )
- '?'
- '&'
- ;
- '!'
- /
- '{'
- '}'
- '~'
- '`'
- _
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- 僣
- 亯
- 仮
- 偁
- 冨
- 侐
- 僧
- 伧
- 儉
- 世
- 串
- 儐
- 件
- 乎
- 佹
- 偟
- 凤
- 下
- 亄
- 傯
- 伟
- 仛
- 乲
- 償
- 俎
- 儈
- 乢
- 倡
- 僛
- 丙
- 傿
- 俲
- 傳
- 倲
- 仒
- 偉
- 乷
- 儁
- 儀
- 中
- 买
- 伤
- 入
- 倇
- 佩
- 侲
- 丿
- 俀
- 乯
- 俺
- 倢
- 伶
- 冶
- 兣
- 凗
- 丹
- 僚
- 侵
- 伌
- 佨
- 侯
- 久
- 伹
- 僇
- 偖
- 倃
- 冒
- 仧
- 冼
- 俽
- 兎
- 仰
- 仹
- 僝
- 儘
- 不
- 侊
- 係
- 亷
- 丂
- 偌
- 七
- 俈
- 侽
- 主
- 丵
- 仔
- 仂
- 儠
- 倊
- 仉
- 兇
- 凈
- 兕
- 么
- 傆
- 傥
- 乆
- 仢
- 兰
- 丟
- 乻
- 亴
- 丩
- 仩儰
- 俆
- 之
- 倭
- 儱
- 俤
- 为
- 伕
- 冉
- 僷
- 业
- 兼
- 乙
- 偶俸
- 乭
- 傞
- 仸
- 亹
- 乶
- 俳
- 丏
- 俧
- 兺
- 傲
- 伍
- 偀
- 兑
- 佧
- 傋
- 儍
- 丘
- 凌
- 倷
- 仓
- 一
- 侀
- 像
- 凜
- 伖
- 傹
- 儭
- 亿
- 们
- 书
- 偛
- 亨
- 冭
- 儜
- 佱
- 丑
- 儺
- 丌像
- 佊亂
- 侫
- 仴儫
- 仱
- 丄
- 丫
- 侎
- 傤
- 伢
- 兗
- 佭
- 傕
- 九
- 冰
- 供
- 凁
- 偔
- 兝丆
- 儬
- 偻
- 俇乒
- 侜
- 丠
- 侼
- 侺
- 乤
- 亻
- 佺
- 凡
- 亏
- 乜
- 亘
- 兹
- 亠
- 儴
- 乞
- 亅
- 僘
- 兲
- 倠
- 东
- 几
- 偋
- 乂
- 侹亐俔
- 亐俔
- 乬
- 亸
- 偝
- 僿
- 偎
- 処
- 倪
- 丸
- 偦
- 乨
- 僗
- 冿
- 丛
- 乕
- 严
- 传
- 丧
- 乚
- 令
- 僌
- 兤
- 光
- 儑
- 倜
- 佻
- 丆
- 乷主
- 倐
- 凔
- 保
- 傜凁
- 偡
- 佋
- 七一
- 兄
- 伣
- 儧
- 倛
- 丨
- 乼
- 侹俔
- 临
- 丅
- 京
- 偮
- 兟
- 僠
- 兝
- 凋仂
- 兹么
- 偲
- 侃
- 净
- 会
- 佂
- 儫
- 丣仙
- 习
- 倴
- 僸侜
- 偢僄
- 侙
- 俉
- 亝
- 侻
- 俚
- 仃
- 兝乗
- 伬
- 偺
- 亰
- 乪
- 俹
- 不儠
- 凈乏
- 凃
- 侱
- 乓倎
- 偱
- 佳
- 伩
- 侮
- 偹
- 乇
- 偮互
- 伄
- 凕
- 俟
- 仴
- 互
- 傇
- 僜
- 俭
- 俫
- 儿
- 丽儑
- 儭丵
- 佚
- 偊
- 侑丨
- 倱
- 侾
- 僿亸
- 伒
- 兊倐
- 免
- 乏
- 侳
- 伉
- 冬
- 俢
- 价
- 債
- 俆伩
- 仰亝
- 佝
- 乫
- 兜冖典
- 乐
- 两
- 乄
- 傭
- 乔
- 凈僌
- 倰
- 亓
- 倌
- 偳
- 侑
- 倅
- 償傿
- 俔
- 傩乘儮
- 偾
- 些
- 冂
- 傾
- 佒
- 佡亂
- 偷
- 伵
- 偏
- 倬
- 傽
- 仗
- 军
- 丷
- 丶
- 乣其
- 假
- 乜佝
- 仨
- 乡
- 倬仢
- 僸
- 伅
- 儂伊乴
- 僺
- 僾
- 乇伻
- 兮倨
- 凥侵
- 亢个仓
- 伀
- 偩
- 倅仚傕
- 偤
- 仄
- 亮
- ▁伋丕兙
- 儮
- 儣
- 丗
- 儞
- 倿
- 倎
- 侑丫
- 亟
- 僥
- 伿
- 僽
- 凛
- 仡
- 也
- 亂
- 側
- 亲
- 侌
- 俖
- 亐
- 俒
- 乃傺
- 今
- 偆
- 乄传
- 僫傤
- 丁
- 儲
- 兩
- 佉佤
- 亱
- 佒傘
- 兊
- 傉
- 儻
- 傝
- 仡倓俣
- 乩
- 儂佢伊乴
- 伇
- 侉
- 乾
- 丬
- 井侺
- 仟
- 乑
- 丌
- 丣
- 依
- 伴
- 佥
- 儻伀
- 伙佾
- 侹亐
- 乘
- 伎
- 人
- 俩
- 供僜億丅
- 儖丬
- 亨互
- 丢側
- 僺亓
- 仦义
- 倕
- 仙
- 享
- 佡
- 傜
- 乯侊
- 丽
- 冡
- 仐会
- 乩傹
- 兙
- 偈
- ▁並作丰
- 伶兕
- 凛倚
- 亊
- 伇佧
- 佤
- 俛举
- 伛
- 傰
- 乫东
- 俈乜
- 不儱
- 偼
- 傣
- 僃
- 佂仞
- 似
- 倜凔
- 井
- 乌侬佘
- 侚
- 乁傎亥
- 佊佡伞
- 僇傲
- 伏
- 偶
- 偷俄
- 丢也
- 七佭
- 专
- 元丠
- 伞
- 倒
- 佪
- 両
- 傏
- 丸佹
- 供僜
- 倘
- 俣
- 俉倕
- 倜俤
- 傉丵
- 倚
- 俭侰
- 以侬佘
- 乔亠
- 傦亜俦
- 倇侯
- 倅仚
- 侕
- 任
- 亜俦
- 凘
- 俪伃伕
- 侣俉
- 両亽
- 儊
- 丢
- 凥
- 侼仓
- 儖
- 倨
- 什
- 佄
- 僯
- 丸乎
- 乖
- 俄
- 佊
- 乣
- 丑丏
- 伸
- 兡
- 丐偊冡侮
- 俵俁
- 五
- 们凌
- 乍
- 丞
- ▁伋丕
- 允
- 伏允
- 儧乆
- 倵
- 净伧
- 偝伧
- 僤仂
- 倭伏书
- 五偝
- 任偩
- 乒
- 凡兇
- 元
- 偓偾
- 俳亖
- 傡
- 仈
- 供乨
- 俫兼
- 亇俖
- 侭
- 乁侪冲
- 兆
- 佄亴
- 債俢
- 侹
- 乱侼
- 倰兡
- 傔
- 儵
- 佷仢
- 信
- 倩
- 亇
- 侓
- 凍
- 僫傰
- 儾
- 俳克
- 丐偊冡
- 傶使
- 侃傱丮
- 僜億丅
- 亴伕
- 以乌侬
- 仐
- 佢伊乴
- 丼
- 丶傱丮
- 傓
- 做傲
- 仳
- 仪
- 丷侙
- 們
- 做
- 働
- 俇
- 佂冬
- 便
- 偄
- 倷丷侙
- 佞
- 偤偋
- 乧俜
- 倆
- 元伎
- 偛僸
- 偢
- 仉亖
- 临允
- 久伟
- 三
- 丐
- 侣俉倕
- 倏
- 乛冁
- 令侼
- 之伖
- 佦
- 兕伶
- 减
- 做兟
- 京伧京
- 佒亯傘
- 几侭
- 似仗
- 亀乕
- 亽
- 乐偏
- 乢冉
- 仴俔
- 偁侀
- 儸
- 亗佁兽且偵乊倘乳予
- 冚
- 凂
- 俌些丅
- 乬傥
- 乌侬
- 仝
- 俌億丅
- 偰
- 兒
- 仇乑
- 傱丮
- 偫
- 倛偄
- 倵僫傤
- 亯侀
- 儴偻
- 偛僌
- 傶使乖
- 再
- 偷僣俄
- 亶
- 偻倴
- 倯
- 今冝
- 兜
- 凢
- 倣其
- 凝
- 兖
- 供僜億些
- 偆冂
- 俜
- 仏兖凍
- 僾侩
- 体
- 仡倓
- 佷倬
- 倓
- 仉仢
- 俅
- 侧
- 亣仃
- 冉乢
- 冇
- 儻伀乪
- 之俤
- 乄佉佤
- 丂下
- 俅凝
- 伏书
- 僤
- 乧
- 儒
- 侃再
- 乽
- 偝五
- 伛傺
- 凤俺
- 儕亙佘
- 乧倰
- 俦
- 付
- 世傯
- 以乌侬佘
- 伾
- ▁並兆
- 佊佡
- 侑倠
- 儹
- 倧
- 丆乗
- 伡伥凓
- 倬俲
- 减为
- 亢个
- 侎件
- 俌些
- 侣倕
- 兩依
- 傘
- 凃亠
- 以
- 丶傱
- 佉
- 倭伏允
- 乞亴
- 価倉
- 傔义
- 乗
- ▁具作丰
- 僾井傉
- 伬丏伬
- 佔
- 佡倱
- 佱倒
- 倐兺
- 伻
- 仩
- 儻亗
- 佢乴
- 侎僧
- 僔伴
- 傽亠
- 傠举
- 偧
- 兎丹
- 临允书
- 佷
- 兣亻
- 丘兜冖典
- 僊
- 乜凂倌
- 伸似仗
- 亁偺
- 凋
- 倓俣
- 偠
- 侧亽
- 亘俄
- 佷侨
- 佸什
- 俗万
- 凕伒
- 凂倌
- 俘
- 償儠
- 倵僫傰
- 佢
- 僽偾
- 伂
- 丞乎余丈亁
- 亹亻
- 伖丙
- 乭伀
- ▁並作
- 伛侕于
- 减乑
- 似丠
- 偒
- 你
- 亇倩
- 典
- 兲凃
- 仄偼乆
- 伯侩傣俩
- 俛举凈
- 傩
- 佂仞偩
- 仃佞
- 僷乻
- 凓
- 僫
- 傌
- 仄偼
- 僣俄
- 丐偊
- 乡俖
- 井傉
- 与
- 似傭
- 偵
- 兜充佰
- 乄丨
- 亇伸
- 伸倩
- 乮
- 佭佥
- ▁具丕
- 儞乕
- 僛侊
- 侃傱
- 五偝五偝
- 乶儧
- 亊冭
- 元仗
- 倴偻
- 俯
- 元傭
- 伱偼
- 佬乆
- 冓
- 伔
- 伈
- 伢儕亙佘
- 仜
- 万
- 俫冁
- 佡伞
- 僜億些
- 休倂
- 亣
- 凒
- 俵俁儁
- 丐偊侘
- 佻买
- 亿件
- 侫偁侫
- 亏倣其
- 俊
- 侈
- 俌億
- 佀
- 佱仃佞
- 兤僗
- 僣倛
- 儯俫
- 乧倰兡
- 兝伞
- 倷丷
- 両亽儁
- 兢
- 亖
- 偯伺
- 侲倓俣
- 俤佹
- 仏
- 丐偊侘侮
- 偕
- 個儧
- 佭俅凝
- 丑伧
- 乄偱
- 仠
- 俖丠
- 丁伧丁
- 仪倧亚
- 僞
- 乴
- 丂丿
- 伱
- 俯俫
- 亹冉
- 俾
- 佊佡亂
- 偵倚
- 冒冶
- 伖佹
- 兊乡
- 仪倧
- 丸亯
- 偠丄偠
- 仞
- 俴
- 偛僸侜
- 你俅凝
- 体倰兡
- 亞
- 俭冷
- 乄侑
- 亇伸似仗
- 儯乛冁
- 乜凂
- 傆冰
- 僾井侺
- 佭乫东
- 兪
- 丮
- 儂伊
- 億
- 億些
- 俤丸
- 傠举凈
- 伯侩
- 偁侮
- 侌军
- 侀偁
- 倆僉亵
- 伷
- 倠仃
- 偂促佾
- 亯亁
- 侨
- 俳亖乪
- 僷买
- 偔之
- 僚仸
- 兮
- 乽儯乛冁
- 偋亘
- 亾
- 偶俸之
- 亣侚丗
- 傤兊
- 処凢
- 亹冉亹
- 偐俋
- 允书
- 倬侨
- 伳
- 侅
- 佳低
- 再上仝
- 亯傘
- 侎儉
- 冁
- 佬儧乆
- 伯
- 与准侻
- 凔儺
- 乾俣
- 傽书
- 亻兣
- 乂儵
- 仉克
- 亙
- 們什
- 倝
- 也側
- 丞余丈亁
- 佭俅
- 傶
- 個
- 傾儺
- 兠
- 侽侯
- 佻兎
- 伀乪
- 侣
- 儯
- 乣亪
- 仟佋
- 冶伟
- 傩乘
- 仮侯
- 供僜億些丅
- 儁倜
- 仿
- 写
- 僑
- 偬
- 働互
- 二佻
- 傱
- 偌伎
- 侈儁
- 儲兕
- 亣侚佞
- 俌
- 万俈
- 乭伀乪
- 乡倩
- 光侲
- 亇俖丠
- 亇伸倩
- 佭仛
- 亁
- 佒偁傘
- 傴俼
- 丛兄
- 乱
- 亷佩
- 仵
- 冔
- 丹伶
- 俸
- 倴乷
- 之凔
- 减乑为
- 侘
- 兄丛兄
- 乓
- 丸丙
- 丂伶
- 偂伔倰兡
- 亵
- 伿偩
- 俗万俈
- 俐今
- 丑丏丑丏
- 傡偎
- 佱亣
- 俨
- 儈儐
- ▁具作兆
- 乁侪
- 兄丛
- 倅傕
- 与准
- 傹佺
- 偝五偝
- 儬仙
- 侚佞
- 備
- 傽傤
- 儛
- 兖凍
- 从
- 僫亠
- 俚冉
- 倒伆値
- 俚冉俚
- 乑为
- 傡亵
- 來
- 冉佩
- 偼乆
- 丰
- 偻儴
- 佬
- 偠丄
- 僉傷
- 倵僫
- 七仛
- 偂
- 凕偵倚
- 丘仚
- 俱
- 乻伶
- 儗傓
- 亽儁
- 侠
- 之侎
- 偌傭
- 从僱乮
- 傶使乖乼
- 伆値
- 亏凢
- 冖典
- 丠伎
- 侕于
- 丝
- 凈乏侜
- 公侴
- 倊亸
- 丁偁丁
- 傛
- 光仡乾
- 佷倬仢
- 乂倪
- 亏処
- 乼伧乼
- 丣傜凁
- 令今
- 今仓
- 僔侻
- 処凢兖凍
- 丑丏丑
- 亿僧
- 僄
- 倘丣
- 侰
- 亹兣
- 冭伏书
- 买伶
- 仿儭
- 乘儮
- 严倷
- 乺仆
- 亣侚債
- 侮侀
- 俐个
- 伛侕
- 兝丆乗
- 亢
- 倴偻倴
- 儸仟
- 佖
- 侃丮
- 傧
- 傎亥
- 僔
- 亚
- 佯
- 人働互
- 亣仃佞
- 凜倅仚傕
- 傡乕
- 乃
- 冷
- 丛仨
- 侄傓
- 偄保
- 于
- 亸兩依
- 佩乢
- 傅
- 倭临
- 伧佒亯傘
- 偛凈
- 乢俟
- 侏
- 偙
- 伉偏
- 傇儘
- 内
- 仕乀
- 偗
- 凢兖凍
- 九僧
- ▁
- 俐
- 伡
- 凡兇僷
- 偄保仟
- 争
- 侎儐
- 冭允
- 佳傫
- 兹俈
- 佺伧佺
- 傉任
- 僣侯
- 再傸
- 五净
- 佀儒冣
- 伿傆
- 侚丗
- 丯亶冥佾
- 侩傣俩
- 僲
- 佭乫
- 丹倇
- 仲
- 偌仗
- 九儉
- 凗丷
- 佛
- 丢也側
- 俹傥
- 侃再上仝
- 僿亸亝
- 俦儮
- 優乿
- 兀
- 九儐
- 偂伔倰
- 傣俩
- 僇冭
- 丟冂
- 仇
- 兣亹
- 侭側
- 侧亽伿
- 儹偲
- 凈偀
- 冚俨傝儖丬
- 侁凞亩
- 俐今冝
- 伖乭
- 余丈亁
- 儻亗佁兽且偵乊倘乳予凡
- 乃于
- 二
- 兲凃亠
- 亪
- 亯侮
- 佈
- 五伧
- 凤傯
- 兽
- 倣
- 仒傋
- 傽伅
- 倭伏
- 伱偼乆
- 倒偦
- 侇
- 僁
- 丁伧
- 儵偠
- 偼倭
- 乕偛
- 仪亚
- 俢佔
- 事
- 侱倭
- 七佭佥
- 冝
- 亇俖倩
- 临允乚
- 兒偄
- 凌侒侺
- 僗俭
- 儸佋
- 佐
- 乌
- 仉凘佪
- 侔
- 侀傞
- 俻
- 冝偩
- 俍佴仅
- 仪倧亚倆
- 丏丑丏
- 倜傾
- 共
- 儻乪
- 侫伧侫
- 儸倢儸
- 傍
- 俛傠举
- 価
- 佮俑佮
- 乳
- 余丈
- 倡倃
- 伧丁
- 僛伟
- 佗
- 俗
- 低
- 其
- 亏乣亪
- 俚俟
- 倵乔
- 傾儺互
- 丈
- 主偌
- 儂
- 供僜億
- 佄亴伕
- 丯亶冥
- 俉倕丶傱丮
- 佳冊但
- 俖倩
- 亏仏
- 亻亹亻
- 凐
- 乹
- 仪倧亚倆僉傷
- 丒
- 丄偠
- 侍
- 佱侚債
- 伹傿
- 偊侘
- 凤傿
- 传于
- 亸亝
- 凣
- 乨億些
- 偯伺亽
- 亴兼
- 七佥
- 偂伔
- 之傾
- 俈凈
- 伈乞丟
- 俳乧俜
- 佂仞乏
- 仏凍
- 乭亖
- 乏侜
- 伳什
- 侦
- 冨主
- 償俎
- 傽允
- 僴
- 俫亅
- 儺互
- 兞
- 丣仙偶俸
- 儻価倉
- 俍僐
- 偹关
- 僚亻
- 丑佺
- 偆佝
- 仁俋
- 伂俐个
- 仧入
- 买下
- 倅仚侖儑
- 侁
- 丹侽
- 佋乬
- 亣侚債俢
- 伏允书
- 冥
- 亜
- 偛僸仴儫
- 僑儑
- 僿亝
- 兇僷
- 伸仗
- ▁具作兒
- 假亯假
- 俍
- 伀亗
- 仟乬
- 丞余丈
- 佌
- 丯
- 中仸
- 佀冣
- 冶侊
- ▁並作兆丰
- 傦亜
- 丑丏丑丏丑丏
- 倧亚
- 倳
- 倍
- 佑
- 偤仨
- ▁伋丕亯兙
- 侣俉倕丶傱丮
- 亲丵
- 仿儭丵
- 傥倴
- 何
- 凌休倂
- 伣僽
- 农傓
- 三乺侍二
- 傳儈
- 使乖
- 上
- 佦乖
- 侖
- 冏
- 佾
- 伎佚
- 倵乔亠
- 乽儯俫
- 侩
- 关儭
- 丣傜
- 仕乀仕傩乘儮丒
- 僗侁
- 不伣
- 僾伯侩
- 倹
- 俍俰
- 仧丂
- 乭克
- 关
- 伢儕亙
- 侯主
- 以侬
- 処仏
- 亣侚
- 亿儉
- 仩侻
- 丘凌
- 丐偊冡偁侮
- 伹侎
- 亃
- 佸
- 使
- 伳偯伺亽
- 僛亨
- 乓倎丵
- 傏伫乆
- 乼偁乼
- 乁傎
- 净伧净伧净
- 伖乎
- 伬伧伬
- 俛傠举凈
- 俉丶
- 俭侏
- 克
- 偠亯偠
- 仇凝
- 係仧
- 儸倢
- 倠仃佞
- 侉佭
- 傰倐
- 丕
- 冚傝儖丬
- 况
- 侞
- 傺
- 兾
- 仹俺
- 乬俹
- 偏侀
- 傁
- 偌丠
- 僝亷
- 伐
- 偄仟
- 予
- 七乫东
- 仞乏
- 佀冣冿侵
- 兤俦
- 偓乔偾
- 伲傓
- 交
- 冇二
- 几侭侾
- 乐俧
- 佟
- 丞乖
- 兣亻兣亻
- 傠
- 們俨傝儖丬
- ▁並作兒
- 倥倎
- 仄万
- 俭仢
- 乨俌些丅
- 佘
- 全
- 亼
- 凄
- 乡俖丠
- 傖
- 亚倆僉傷
- 乨俌些
- 丂仧
- 仧儈
- 亸兩
- 乇伻偠
- 僾傉
- 以乌
- 侲倓
- 亢俐个
- 們俨
- 儕亙
- 冈
- 儏佧
- 乸
- 儻伀倰倏
- 伄以侬佘
- 倧亚倆僉傷亵
- 佁兽且
- 净偁净
- 佅儭
- 倵僫亠
- 凗丷侙
- 僸仴儫
- 丂主
- 佊亂倱
- 修
- 侭兑
- 凜伙佾
- 佺偁佺
- 偻倴偻倴
- 亿僛
- 仁
- 倭允
- 丢兑
- 减乑乚
- 儈俀
- 侢
- 乡俖倩
- 伸似傭
- 傶使侳
- 兊兺
- 乊
- 佂仞偩佦
- 乻兎
- 偛偀
- 乨乁傎亥
- 凜倅仚
- 乜倌
- 冂乏
- 儯俫冁
- 五净五
- 俈佝
- 七一亱
- 借
- 兠万兹
- 佊亂休倂
- 儃
- 僃仰
- 僒
- 冏仫冏
- 倝仲兀兢侌
- 亖倝偸兀兢侌三乺仆二
- 傔义僸
- 偋偷
- 仡乾
- 俉倕丶
- 他
- 丌像公侴
- 伙
- 丞乎余丈
- 倵傤
- 仉亖乪
- 仦
- 佥于
- 任佻
- 冭允书
- 例佧
- 云
- 八傃便
- 億丅
- 义
- 亻兣亻
- 亱俬
- 余
- 充
- 俬
- 先
- 一佹亱
- 佛儧
- 佊佡倱
- 仭
- 兰主
- 偛儠
- 伥
- 佁
- 儺倬仢
- 佸俨傝儖丬
- ▁具兆
- 倾
- 儂佢
- 兇兩
- 仍
- 乁
- 侵佥
- 丳
- 仏俍
- 准僥
- 侧僯
- 冣
- 住
- 倊亸亝
- 俙
- 偄儸
- 倫
- 习丶
- 他儁
- 乩傹乩傹
- 倣乣其
- 伳什偯伺亽
- 倵亠
- 丢倯
- 凜仚傕
- 伄俦
- 伕伏允
- 俳亖傏
- 佀儒冿
- 佄乞亴
- 傫
- 亭
- 倧亚倆
- 亀亵
- 供俌些
- 倜伖
- 冪
- 個儧乆
- 僶
- 伎僔侻
- 伛乃傺
- 从侔乮
- 偓乔
- 伄以乌侬佘
- 僜億
- 僉
- 傛俯俫
- 仉傏
- 冉俚冉
- 冊但
- 侬佘
- 人傰
- 傾偮
- 伇佧伕
- 乄佉
- 丑偁丏
- 侤
- 凑
- 侌军偦
- 停
- 仼亃八傃便亀
- 侥
- 佅亲
- 京偁京
- 僱
- 亀
- 伂兛僵
- 乄侑丨
- 倧亶倆僉傷亵
- 乧伥凓
- 俶
- 乁侪冲们倬
- 代
- 举
- 傭偧乆
- 侫亯侫
- ▁伋丕偁兙
- 俚亻
- 光倓俣
- 倬俕丳偣丷伂兛僵伖
- 人働
- 丷傇
- 伇偲
- 俍俒
- 俷
- 仏俍佴仅
- 几侭兑
- 仱佡伞
- 兊乡俖
- 偲伿
- 佱亣侚債
- 丐偊冡亯侮
- 偓僽偾
- 侌倒偦
- 儯俫亅
- 僾井侩
- 乧伡伥凓
- 儼
- 们凌侒侺
- 佷倬侨
- 倪丄倪
- 儢
- 倮
- 仾
- 佊伞
- 儯乛
- 凌侒
- 伊
- 亀偎
- 偡傲
- 伣僽倲
- 伇佧偲
- 佢伊
- 份冭
- 佱亣侚債俢
- 丌公侴
- 亣侚俢
- 体倰
- 僼
- 儻亗佁兽且偵乊倘乳予么亸
- 冉俚
- 仞偩
- 儋
- 伬丑伬
- 伸倩似仗
- 佊丆
- 仕
- 偵乊倘乳予
- 倭伏允书
- 僽乚
- 冪儑
- 丫乣亪
- 丢侭
- 丠与准侻
- 亗
- 傏伫
- 乨乁侪冲
- 兮倨众
- 丌倐
- 俭乌侬佘
- 倈
- 儼亴
- 伡伥
- 仑
- 兲凃乔
- 兲凃乔亠
- 丹伶丹
- 仏兖俘
- 傎
- ▁具作兆丰
- 儴倢儴
- 偵乊倘
- 佬儧
- 准侻
- 偷僣偷俄
- 丐倪
- 伣僽偾
- 七佭乫东
- 五偝伧偝
- 丂下丂
- 儇
- 乷仮主
- 冮
- 倜九
- 冣冿
- 倹乹倹兟
- 一亱
- ▁伋丕兙亯兙
- 代万
- 冋
- 儵丄儵
- 佂仞僺亓
- 丰倛
- 从僱乮倊
- 兵
- 価侟儅
- 丑伧丏
- 侟儅
- 儻伡伥凓
- 么侭侾
- 僢
- 产佧
- 估
- 仫
- 偊冡侮
- 佅
- 佦乖乼
- 伛乃
- 佴
- 儕
- 佱亣侚佞
- 僟
- 佳冟伦
- 倷丷傇
- 凖侼
- 伯侩傣俩乔亠
- 偓
- 兴
- 侶傓
- 佰
- 丯冥
- 仪倧亶倆
- 仉克傕
- 偺亯偺
- 亏倣乣其
- 乄偱倕
- 儲偺
- 亻佩亻
- 乺
- 仑亡
- 仏兖
- 倍减为
- ▁伋丕兙偁兙
- 俍凑
- 傾儺偮互
- 伛乃于
- 再上
- 五偝伧五
- 儦
- 儶
- 准
- 倉
- 份
- 冱
- 册
- 僈
- 亳
- 俥
- 侸
- 倦
- 促
- 个
- 众
- 六
- 偸
- 僐
- 决
- 倻
- 仯
- 侶
- 偃
- 偽
- 倀
- 儷
- 冐
- 伜
- 冸
- 倔
- 儌
- 且
- 俠
- 倖
- 侬
- 值
- 併
- 偯
- 儰
- 俰
- 傢
- 儽
- 價
- 僋
- 倁
- 傃
- 侒
- 储
- 佃
- 傂
- 偘
- 伨
- 佣
- 凞
- 僩
- 亦
- 儤
- 傸
- 傦
- 偍
- 乛
- 佽
- 仼
- 傼
- 但
- 冽
- 儝
- 农
- 倞
- 侪
- 兘
- 俏
- 偨
- 伲
- 债
- 凇
- 僂
- 伃
- 介
- 俕
- 丱
- 偣
- 佲
- 佫
- 傈
- 佶
- 亍
- 例
- 偪
- 俛
- 伮
- 兔
- 傗
- 兿
- 了
- 伫
- 儥
- 傑
- 冖
- 亡
- 傟
- 俑
- 仅
- 兂
- 倂
- 俓
- 僓
- 乵
- 偐
- 侄
- 偑
- 冹
- 冀
- 僆
- 凖
- 乥
- 伝
- 僖
- 傚
- 倗
- 冑
- 佮
- 产
- 仆
- 倄
- 冃
- 俞
- 儔
- 伺
- 健
- 偿
- 佼
- 候
- 儨
- 僀
- 优
- 儩
- 仚
- 僦
- 冞
- 傪
- 冩
- 仺
- 兛
- 儏
- 僬
- 八
- 乿
- 俵
- 內
- 俿
- 円
- 僳
- 乀
- 冦
- 冢
- 冻
- 冲
- 凧
- 僕
- 伪
- 儳
- 僎
- 兌
- 凅
- 傒
- 企
- 冺
- 作
- 冗
- 冊
- 偭
- 侷
- 凚
- 僻
- 傴
- 俪
- 傻
- 亥
- 俋
- 傀
- 値
- 侂
- 優
- 凟
- 冄
- 僵
- 公
- 傊
- 傷
- 俼
- 凉
- 儅
- 仌
- 冫
- 倸
- 傄
- 伦
- 休
- 冎
- 冧
- 儗
- 儎
- 傮
- 亩
- 兏
- 僭
- 凙
- 冘
- 倶
- 倽
- 僮
- 冟
- 倥
- 冾
- 傐
- 兓
- 凎
- 侴
- 伽
- 儆
- 伆
- 僨
- 伭
- 児
- 兯
- 催
- 养
- 儚
- 兦
- 儙
- 俁
- 偅
- 侗
- 冯
- 伓
- 僅
- 儡
- 凊
- 冴
- 冕
- 僰
- 儓
- 僪
- 冤
- 佇
- 僙
- 侟
- 偞
- 党
- 冠
- 位
- 具
- 並
- 伋
- 僡
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
use_preprocessor: true
token_type: char
src_token_type: bpe
bpemodel: null
src_bpemodel: data/token_list/src_bpe_unigram2000_rm_wavlm_large_21_km1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
tokenizer_encode_conf: null
src_tokenizer_encode_conf:
enable_sampling: true
alpha: 0.4
nbest_size: -1
frontend: embed
frontend_conf:
embed_dim: 512
positional_dropout_rate: 0.1
specaug: specaug
specaug_conf:
apply_time_warp: false
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: false
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv1d1
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
model: discrete_asr
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
required:
- output_dir
- src_token_list
- token_list
version: '202308'
distributed: false
```
</details>
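With `ctc_weight: 0.3` in the config above, training uses ESPnet's hybrid CTC/attention objective, which interpolates the two losses as `L = w * L_ctc + (1 - w) * L_att`. A minimal sketch of that interpolation, with hypothetical per-batch loss values for illustration:

```python
def hybrid_loss(loss_ctc, loss_att, ctc_weight=0.3):
    """ESPnet-style hybrid objective: L = w * L_ctc + (1 - w) * L_att."""
    return ctc_weight * loss_ctc + (1.0 - ctc_weight) * loss_att

# hypothetical per-batch losses, illustration only
loss = hybrid_loss(2.0, 1.0)
```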
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mgmeskill/rl_course_vizdoom_health_gathering_supreme | mgmeskill | 2023-09-15T22:44:36Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T22:44:26Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.94 +/- 4.18
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r mgmeskill/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it concluded at.
|
salim4n/Taxi-v3 | salim4n | 2023-09-15T21:48:16Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T21:48:12Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="salim4n/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
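Once loaded, the dictionary's Q-table drives a simple greedy policy. In the course notebooks the table sits under a `"qtable"` key — treat that key name as an assumption and verify it against your pickle. A self-contained sketch with a toy table:

```python
import numpy as np

# Toy stand-in for model["qtable"]: Taxi-v3 has 500 states and 6 actions
qtable = np.zeros((500, 6))
qtable[0, 3] = 1.0  # pretend action 3 has the highest value in state 0

def greedy_action(qtable, state):
    """Greedy policy: pick the action with the highest Q-value for `state`."""
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0))  # -> 3
```

With the real model, substitute `qtable = model["qtable"]` and feed in states returned by `env.reset()` / `env.step()`.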
|
salim4n/q-FrozenLake-v1-4x4-noSlippery | salim4n | 2023-09-15T21:45:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T21:45:03Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="salim4n/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hmbyt5-preliminary/byt5-small-english-german | hmbyt5-preliminary | 2023-09-15T21:11:03Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"en",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-04-11T09:31:37Z | ---
license: mit
language:
- en
- de
---
# hmByT5 - Preliminary Language Models
Preliminary Historic Multilingual and Monolingual ByT5 Models. The following languages are currently covered:
* English (British Library Corpus - Books)
* German (Europeana Newspaper)
More details can be found in [our GitHub repository](https://github.com/stefan-it/hmByT5).
# Pretraining
We use the official JAX/FLAX example in Hugging Face Transformers to pretrain a ByT5 model on a single v3-8 TPU.
Details about the training can be found [here](https://github.com/stefan-it/hmByT5/tree/main/hmbyt5-flax).
# Evaluation on Downstream Tasks (NER)
We evaluated the hmByT5 model on downstream tasks:
| Model | English AjMC | German AjMC | French AjMC | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | Avg. |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|--------------|--------------|-----------------|-----------------|--------------|--------------|------|
| [`hmbyt5-preliminary/byt5-small-english-german`](https://huggingface.co/hmbyt5-preliminary/byt5-small-english-german) | 85.74 ± 0.72 | 87.45 ± 0.67 | 84.23 ± 0.65 | | | | | |
# Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
MohannadTak/ppo-LunarLander-v2-1e6 | MohannadTak | 2023-09-15T20:53:14Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T20:52:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.71 +/- 20.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list first):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# filename is hypothetical; verify it in the repo's "Files" tab
checkpoint = load_from_hub("MohannadTak/ppo-LunarLander-v2-1e6", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vaiana/a2c-PandaReachDense-v3 | vaiana | 2023-09-15T20:47:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T20:41:58Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list first):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# filename is hypothetical; verify it in the repo's "Files" tab
checkpoint = load_from_hub("vaiana/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
PygmalionAI/mythalion-13b | PygmalionAI | 2023-09-15T20:30:08Z | 2,716 | 158 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text generation",
"instruct",
"en",
"dataset:PygmalionAI/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-05T12:45:18Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---
<h1 style="text-align: center">Mythalion 13B</h1>
<h2 style="text-align: center">A merge of Pygmalion-2 13B and MythoMax 13B</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. This model was created in
collaboration with [Gryphe](https://huggingface.co/Gryphe), a mixture of our [Pygmalion-2 13B](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
and Gryphe's [Mythomax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
Finer details of the merge are available in [our blogpost](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#mythalion-13b).
According to our testers, this model seems to outperform MythoMax in RP/Chat. **Please make sure you follow the recommended
generation settings for SillyTavern [here](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#sillytavern) for
the best results!**
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
This model can be prompted using both the Alpaca and [Pygmalion formatting](https://huggingface.co/PygmalionAI/pygmalion-2-13b#prompting).
**Alpaca formatting**:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
**Pygmalion/Metharme formatting**:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
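As a rough illustration, the chaining can be done with a small hand-rolled helper (this is not an official Pygmalion utility — just one way to assemble the string):

```python
def build_metharme_prompt(persona, turns):
    """Assemble a Metharme-style prompt; `turns` is a list of (role, text)
    pairs with role in {"user", "model"}."""
    prompt = (
        "<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:\n"
        f"{persona}\n"
        "You shall reply to the user while staying in character, "
        "and generate long responses.\n"
    )
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    # End with the model token so generation continues from here
    return prompt + "<|model|>"

print(build_metharme_prompt("A cheerful android.", [("user", "Hello!")]))
```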
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for the [Pygmalion-2 13B](https://huggingface.co/PygmalionAI/pygmalion-2-13b) model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
Model-SafeTensors/mythalion-13b | Model-SafeTensors | 2023-09-15T20:30:08Z | 13 | 0 | null | [
"pytorch",
"safetensors",
"llama",
"text generation",
"instruct",
"text-generation",
"en",
"dataset:PygmalionAI/PIPPA",
"dataset:Open-Orca/OpenOrca",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:databricks/databricks-dolly-15k",
"license:llama2",
"region:us"
]
| text-generation | 2024-11-19T00:32:28Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---
<h1 style="text-align: center">Mythalion 13B</h1>
<h2 style="text-align: center">A merge of Pygmalion-2 13B and MythoMax 13B</h2>
## Model Details
The long-awaited release of our new models based on Llama-2 is finally here. This model was created in
collaboration with [Gryphe](https://huggingface.co/Gryphe), a mixture of our [Pygmalion-2 13B](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
and Gryphe's [Mythomax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
Finer details of the merge are available in [our blogpost](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#mythalion-13b).
According to our testers, this model seems to outperform MythoMax in RP/Chat. **Please make sure you follow the recommended
generation settings for SillyTavern [here](https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#sillytavern) for
the best results!**
This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.
## Prompting
This model can be prompted using both the Alpaca and [Pygmalion formatting](https://huggingface.co/PygmalionAI/pygmalion-2-13b#prompting).
**Alpaca formatting**:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
**Pygmalion/Metharme formatting**:
```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}
You shall reply to the user while staying in character, and generate long responses.
<|user|>Hello!<|model|>{model's response goes here}
```
The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input.
The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained together to form a conversation history.
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
## Acknowledgements
We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for the [Pygmalion-2 13B](https://huggingface.co/PygmalionAI/pygmalion-2-13b) model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
QMB15/Mythomax-L2-13B-8bit-exl2 | QMB15 | 2023-09-15T20:29:01Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T19:23:58Z | ---
license: other
language:
- en
---
This is an exllama V2 quantization of https://huggingface.co/Gryphe/MythoMax-L2-13b
Uses a target bpw of 8, intended for best quality on cards like a 3090 or similar.
Includes measurement.json for convenience of quantizing to other sizes.
Calibration data: https://huggingface.co/datasets/wikitext/resolve/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
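Reusing the bundled `measurement.json` skips the slow measurement pass when re-quantizing to a different size. A hypothetical invocation of exllamav2's `convert.py` — the flag names here should be double-checked against the version of exllamav2 you have installed:

```shell
# -b: target bits per weight; -m: reuse this repo's measurement file
# -c: calibration parquet; -cf: output dir for the quantized model
python convert.py \
    -i /path/to/MythoMax-L2-13b \
    -o /tmp/exl2-work \
    -cf /path/to/mythomax-6.0bpw-exl2 \
    -b 6.0 \
    -m measurement.json \
    -c 0000.parquet
```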
An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.
The script and the accompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
This model is proficient at both roleplaying and storywriting due to its unique nature.
Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) (You're the best!)
## Model details
The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time)
This type of merge is incapable of being illustrated, as each of its 363 tensors had a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.
## Prompt Format
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
|
QMB15/Stheno-L2-13B-8bit-exl2 | QMB15 | 2023-09-15T20:28:39Z | 10 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T20:07:08Z | ---
license: llama2
language:
- en
---
This is an exllama V2 quantization of https://huggingface.co/TheBloke/Stheno-L2-13B-GPTQ
Uses a target bpw of 8, intended for best quality on cards like a 3090 or similar.
Includes measurement.json for convenience of quantizing to other sizes.
Calibration data: https://huggingface.co/datasets/wikitext/resolve/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 70%; min-width: 300px; display: block; margin: auto;">
An experimental merge of several models using two different methods, [Ties-Merge](https://github.com/cg123/ties-merge) and [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient)
I plan for this to be the base of my Model with my own [Stheno: ERP-Based LORA] merged in, some time in the future.
Stheno:
<br>Gradient Merge of Stheno-P1 & Stheno-P2.
SISTER MODEL HERE: [Stheno-Inverted-L2-13B](https://huggingface.co/Sao10K/Stheno-Inverted-L2-13B)
Quants courtesy of TheBloke!
<br>[GPTQ](https://huggingface.co/TheBloke/Stheno-L2-13B-GPTQ)
<br>[GGUF](https://huggingface.co/TheBloke/Stheno-L2-13B-GGUF)
<br>[GGML](https://huggingface.co/TheBloke/Stheno-L2-13B-GGML)
Test Checklist:
<br>Censorship - Fairly Uncensored
<br>Writing - Good Prose, Fairly Descriptive
<br>NSFW - Yes
<br>IQ Level - Pretty Smart
<br>Formatting - Proper Formatting with Examples
Stheno-P1 [Ties-Merge]
<br>-----[elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)+[nRuaif/Kimiko-v2 **LORA**](https://huggingface.co/nRuaif/Kimiko-v2-13B)
Stheno-P2 [Ties-Merge]
<br>-----[CalderaAI/13B-Legerdemain-L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)+[lemonilia/limarp-llama2-v2 **LORA**](https://huggingface.co/lemonilia/limarp-llama2-v2)
<br>-----[ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
<br>-----[Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon)
Most formats could work, but my tests have all been done in Alpaca format and it works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
Below is the Illustration for the Final Merge:

Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [ties-merge](https://github.com/cg123/ties-merge) script, and [Gryphe](https://huggingface.co/Gryphe) for their great [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script.
Thanks to the original model creators too!
```
Art by wada_kazu / わだかず (pixiv page private?)
``` |
DriveMyScream/Face_Image_Segementation | DriveMyScream | 2023-09-15T20:18:08Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-09-15T19:26:54Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
ncdisrup-ai/test_trainer | ncdisrup-ai | 2023-09-15T20:12:07Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"en",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-15T17:32:02Z | ---
license: apache-2.0
datasets:
- imdb
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
--- |
holtschn/heman-toy-lora-trained-sdxl | holtschn | 2023-09-15T20:05:54Z | 3 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-14T21:17:42Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of he-man
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - holtschn/heman-toy-lora-trained-sdxl
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on photo of he-man using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
microsoft/swin-tiny-patch4-window7-224 | microsoft | 2023-09-15T19:59:37Z | 501,711 | 43 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"swin",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (tiny-sized model)
Swin Transformer model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
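The window-restricted attention is easiest to see in the partitioning step: the feature map is cut into non-overlapping windows and self-attention is computed inside each one. A toy NumPy sketch of the partitioning only (illustrative — not the actual Swin implementation, which also shifts the windows between successive blocks):

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping windows,
    returning shape (num_windows, window_size * window_size, C)."""
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size * window_size, C)

x = np.arange(16).reshape(4, 4, 1)    # tiny 4x4 single-channel "feature map"
windows = window_partition(x, 2)
print(windows.shape)                  # -> (4, 4, 1): 4 windows of 4 tokens each
print(windows[0].flatten().tolist())  # -> [0, 1, 4, 5]: top-left 2x2 window
```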
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
model = AutoModelForImageClassification.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
DriveMyScream/Image_SuperResolution | DriveMyScream | 2023-09-15T19:59:32Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-09-15T19:25:52Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
AmrMorgado/ppo-Huggy | AmrMorgado | 2023-09-15T19:55:20Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-15T19:55:14Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AmrMorgado/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
InexperiencedMe/ppo-Pyramids | InexperiencedMe | 2023-09-15T19:50:26Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-15T19:50:23Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: InexperiencedMe/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
bigscience/test-bloomd | bigscience | 2023-09-15T19:43:20Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-06-24T16:04:45Z | This is a utility repo for testing inference methods. Please use [bigscience/bloom](https://huggingface.co/bigscience/bloom) to access the latest model. |
ahsan-mavros/balanced-genai-training | ahsan-mavros | 2023-09-15T19:39:25Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-15T19:32:51Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: balanced-genai-training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# balanced-genai-training
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0895
- Rouge1: 97.1196
- Rouge2: 88.8856
- Rougel: 97.1174
- Rougelsum: 97.1196
- Gen Len: 5.3088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1037 | 1.0 | 1226 | 0.0895 | 97.1196 | 88.8856 | 97.1174 | 97.1196 | 5.3088 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.0+cu102
- Datasets 2.14.5
- Tokenizers 0.13.3
|
heroisclub/superia | heroisclub | 2023-09-15T19:25:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-15T19:25:18Z | ---
license: creativeml-openrail-m
---
|
davera-017/Pixelcopter-PLE-v5 | davera-017 | 2023-09-15T19:12:43Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T19:12:38Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.30 +/- 14.28
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
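The `mean_reward` above (17.30 +/- 14.28) is the average episodic return plus or minus its standard deviation over the evaluation episodes. It is typically computed along these lines (the episode returns below are made up for illustration):

```python
import statistics

# Hypothetical evaluation returns; the real per-episode values are not published here
episode_returns = [12.0, 30.0, 5.0, 22.0]

mean = statistics.mean(episode_returns)
std = statistics.pstdev(episode_returns)  # population std, as eval scripts commonly report
print(f"{mean:.2f} +/- {std:.2f}")
```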
|
felixquinihildebet/PPO_agent | felixquinihildebet | 2023-09-15T19:04:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T18:15:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.35 +/- 27.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card
checkpoint = load_from_hub("felixquinihildebet/PPO_agent", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
SadiulArefin/flan-t5-xlsum | SadiulArefin | 2023-09-15T18:50:35Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"base_model:SadiulArefin/flan-t5-xlsum",
"base_model:finetune:SadiulArefin/flan-t5-xlsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-14T16:51:58Z | ---
license: apache-2.0
base_model: SadiulArefin/flan-t5-xlsum
tags:
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: flan-t5-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-xlsum
This model is a fine-tuned version of [SadiulArefin/flan-t5-xlsum](https://huggingface.co/SadiulArefin/flan-t5-xlsum) on the xlsum dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.8182
- eval_runtime: 293.1995
- eval_samples_per_second: 39.342
- eval_steps_per_second: 4.918
- epoch: 1.0
- step: 10000
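As a sanity check, the throughput figures above are internally consistent: `eval_samples_per_second` is approximately `eval_steps_per_second * eval_batch_size`, and runtime times samples-per-second recovers the evaluation set size:

```python
# Cross-checking the reported evaluation throughput figures
eval_runtime = 293.1995
eval_samples_per_second = 39.342
eval_steps_per_second = 4.918
eval_batch_size = 8

print(round(eval_steps_per_second * eval_batch_size, 3))  # ~39.344, matches samples/s
print(round(eval_runtime * eval_samples_per_second))      # ~11535 evaluation samples
```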
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
thebitanpaul/midjourneyPromptGenerator | thebitanpaul | 2023-09-15T18:40:51Z | 0 | 2 | null | [
"region:us"
]
| null | 2023-09-15T11:49:58Z |
# Midjourney Prompt Generator
This Midjourney prompt generator makes digital creators' lives easier by producing specific, detailed prompts for Midjourney, enabling them to generate more accurate and realistic images.
## Use the Midjourney Prompt Generator

You can make a copy of this Colab notebook to use the Midjourney Prompt Generator: https://drive.google.com/file/d/1gyeQZGuu18LoX3rZKIUBTrDhhXM5TBfZ/view?usp=sharing
## Documentation
This repository contains a fine-tuned Falcon 7B language model (LLM) that generates realistic, detailed Midjourney prompts from simple instructions. Provide a straightforward instruction and the model returns the creative and technical prompts you need, helping you improve creative writing, brainstorm ideas, and streamline project development.
## Tech Stack
**Notebook:** Google Colab
**LLM Model:** Falcon 7b
**Data Set Generator:** RelevanceAI
**Deep Learning Model:** Transformer
**VCS:** GitHub
**Model saved at:** Hugging Face
## Demo
https://github.com/thebitanpaul/movie-guide/assets/99794785/ffe53cb3-0a57-477b-90dd-df4a9420de63
## Key Features
- Enter any simple instruction and the fine-tuned Falcon 7B model will return detailed prompts for generating realistic Midjourney results.
## Output


## Lessons Learned
- Building this project made me more confident with large language models (LLMs).
- Learned how LLMs are fine-tuned for domain-specific use.
- Learned how to use Hugging Face and RelevanceAI for dataset creation.
- Learned how Transformers work in deep learning.
- Explored various applications of Falcon 7B.
## About Me
I am an AI and machine learning enthusiast and a growing Android developer with a keen interest in data analytics and LLMs.
I have worked with Android Studio, MySQL Workbench, Microsoft Power Automate, and Azure Cloud.
## 🔗 Links
[](https://www.linkedin.com/in/thebitanpaul)
[](https://twitter.com/thebitanpaul_)
---
license: apache-2.0
---
|
bartmiller/ppo-LunarLander-v2 | bartmiller | 2023-09-15T18:34:04Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T18:33:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.30 +/- 25.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, following the usual `<algo>-<env>.zip` convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card
checkpoint = load_from_hub("bartmiller/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
PHL99/poca-SoccerTwos | PHL99 | 2023-09-15T18:31:30Z | 56 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-15T18:30:58Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: PHL99/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
fetiska/mr.Balance | fetiska | 2023-09-15T18:29:29Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T18:29:19Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: mr.Balance
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
STomoya/resnet101.st_safebooru_1k | STomoya | 2023-09-15T18:29:24Z | 15 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
]
| image-classification | 2023-09-15T18:28:32Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for resnet101.st_safebooru_1k
## Model Details
- **metrics:**
|Precision|Recall|F1-score|
|-|-|-|
|0.7965|0.4397|0.5411|
|
bryandts/image_classification_face | bryandts | 2023-09-15T18:26:53Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-15T17:15:56Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification_face
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification_face
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1157
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.6266 | 0.475 |
| No log | 2.0 | 80 | 1.3303 | 0.5375 |
| No log | 3.0 | 120 | 1.2399 | 0.525 |
| No log | 4.0 | 160 | 1.1779 | 0.5563 |
| No log | 5.0 | 200 | 1.1825 | 0.55 |
| No log | 6.0 | 240 | 1.1564 | 0.5875 |
| No log | 7.0 | 280 | 1.1258 | 0.6125 |
| No log | 8.0 | 320 | 1.1154 | 0.625 |
| No log | 9.0 | 360 | 1.1169 | 0.6062 |
| No log | 10.0 | 400 | 1.1155 | 0.625 |
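As a rough sanity check, the logged step counts imply the size of the training split: 40 optimizer steps per epoch at batch size 16 suggests about 640 training images (the last batch may be partial):

```python
# Back-of-the-envelope training-set size from the logged steps (last batch may be partial)
steps_per_epoch = 40
train_batch_size = 16
approx_train_images = steps_per_epoch * train_batch_size
print(approx_train_images)  # 640
```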
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gmongaras/Wizard_7B_Squad_v2 | gmongaras | 2023-09-15T18:20:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T17:36:25Z | ---
license: openrail
---
Model from: https://huggingface.co/TheBloke/wizardLM-7B-HF/tree/main
Trained on: https://huggingface.co/datasets/squad
Model trained for 6000 steps with a batch size of 8 and 2 gradient accumulation steps. |
ukeme/sgservices-base-sentence-transformer | ukeme | 2023-09-15T18:12:25Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:embedding-data/sentence-compression",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-03T11:11:53Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- embedding-data/sentence-compression
---
# ukeme/sgservices-base-sentence-transformer
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ukeme/sgservices-base-sentence-transformer')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ukeme/sgservices-base-sentence-transformer')
model = AutoModel.from_pretrained('ukeme/sgservices-base-sentence-transformer')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
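For intuition, the masked mean pooling used above can be sketched in plain Python on a toy example. Padded positions (mask 0) are excluded from the average:

```python
# Toy re-implementation of the mean_pooling logic above, without torch
def mean_pool(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for emb, mask in zip(token_embeddings, attention_mask):
        if mask:  # only real (non-padding) tokens contribute
            count += 1
            for i, v in enumerate(emb):
                sums[i] += v
    return [s / max(count, 1e-9) for s in sums]

# Three token embeddings; the last one is padding and must not affect the result
print(mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0]))  # [2.0, 3.0]
```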
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ukeme/sgservices-base-sentence-transformer)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
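`MultipleNegativesRankingLoss` treats each (anchor, positive) pair in a batch as the correct match and the other positives as in-batch negatives, applying cross-entropy over cosine similarities scaled by 20. A minimal sketch of that computation:

```python
import math

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def mnr_loss(anchors, positives, scale=20.0):
    # Cross-entropy where the i-th positive is the label for the i-th anchor
    total = 0.0
    for i, a in enumerate(anchors):
        scores = [scale * cos_sim(a, p) for p in positives]
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += log_z - scores[i]
    return total / len(anchors)

# Orthogonal pairs are perfectly separable, so the loss is near zero
print(mnr_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]]))
```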
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
dimfeld/BioLinkBERT-large-feat | dimfeld | 2023-09-15T18:10:50Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"biolinkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:pubmed",
"arxiv:2203.15827",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2023-05-19T20:54:06Z | ---
license: apache-2.0
language: en
datasets:
- pubmed
tags:
- bert
- exbert
- linkbert
- biolinkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
widget:
- text: Sunitinib is a tyrosine kinase inhibitor
duplicated_from: michiyasunaga/BioLinkBERT-large
pipeline_tag: feature-extraction
---
## BioLinkBERT-large
**This is identical to `michiyasunaga/BioLinkBERT-large` except the pipeline tag in the model card was changed to feature-extraction.**
BioLinkBERT-large model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It improves on BERT by additionally capturing **document links**, such as hyperlinks and citation links, to incorporate knowledge that spans multiple documents. Specifically, it was pretrained by feeding linked documents into the same language-model context, rather than a single document alone.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-large')
model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-large')
inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art.
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE |
| ---------------------- | -------- | -------- | ------- | -------- |
| PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 |
| **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** |
| **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** |
| | MMLU-professional medicine |
| ---------------------- | -------- |
| GPT-3 (175B params) | 38.7 |
| UnifiedQA (11B params) | 43.2 |
| **BioLinkBERT-large (340M params)** | **50.7** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
``` |
jfriduss/bert_for_job_descr_parsing | jfriduss | 2023-09-15T18:01:37Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:jjzha/jobbert_knowledge_extraction",
"base_model:finetune:jjzha/jobbert_knowledge_extraction",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-15T18:00:12Z | ---
base_model: jjzha/jobbert_knowledge_extraction
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tok_train_info
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tok_train_info
This model is a fine-tuned version of [jjzha/jobbert_knowledge_extraction](https://huggingface.co/jjzha/jobbert_knowledge_extraction) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2616
- Precision: 0.5755
- Recall: 0.5980
- F1: 0.5865
- Accuracy: 0.9072
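The F1 reported above is the harmonic mean of the precision and recall:

```python
# F1 = harmonic mean of precision and recall, matching the numbers above
precision, recall = 0.5755, 0.5980
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.5865
```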
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 20 | 0.4390 | 0.3790 | 0.4608 | 0.4159 | 0.8845 |
| No log | 2.0 | 40 | 0.2831 | 0.5321 | 0.5686 | 0.5498 | 0.9034 |
| No log | 3.0 | 60 | 0.2616 | 0.5755 | 0.5980 | 0.5865 | 0.9072 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/sagisawa_fumika_idolmastercinderellagirls | CyberHarem | 2023-09-15T17:49:52Z | 0 | 1 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/sagisawa_fumika_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T17:32:25Z | ---
license: mit
datasets:
- CyberHarem/sagisawa_fumika_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sagisawa_fumika_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5500, you need to download `5500/sagisawa_fumika_idolmastercinderellagirls.pt` as the embedding and `5500/sagisawa_fumika_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5500**, with the score of 0.962. The trigger words are:
1. `sagisawa_fumika_idolmastercinderellagirls`
2. `long_hair, blue_eyes, black_hair, blush, hairband, breasts, large_breasts, hair_between_eyes, bangs, jewelry, collarbone`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.948 | [Download](7500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.942 | [Download](7000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.951 | [Download](6500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.947 | [Download](6000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| **5500** | **0.962** | [**Download**](5500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.940 | [Download](5000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.946 | [Download](4500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.943 | [Download](4000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.945 | [Download](3500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.939 | [Download](3000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.907 | [Download](2500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.925 | [Download](2000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.926 | [Download](1500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.897 | [Download](1000/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.935 | [Download](500/sagisawa_fumika_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
imvladikon/wav2vec2-xls-r-300m-lm-hebrew | imvladikon | 2023-09-15T17:46:17Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"he",
"robust-speech-event",
"dataset:imvladikon/hebrew_speech_kan",
"dataset:imvladikon/hebrew_speech_coursera",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- he
license: apache-2.0
tags:
- generated_from_trainer
- he
- robust-speech-event
datasets:
- imvladikon/hebrew_speech_kan
- imvladikon/hebrew_speech_coursera
metrics:
- wer
base_model: facebook/wav2vec2-xls-r-300m
model-index:
- name: wav2vec2-xls-r-300m-lm-hebrew
results: []
---
# wav2vec2-xls-r-300m-lm-hebrew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Hebrew speech datasets (imvladikon/hebrew_speech_kan and imvladikon/hebrew_speech_coursera),
with an n-gram language model added following [Boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram).
## Usage
See the companion package at https://github.com/imvladikon/wav2vec2-hebrew,
or use `transformers` directly:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "imvladikon/wav2vec2-xls-r-300m-lm-hebrew"

sample_iter = iter(load_dataset("google/fleurs", "he_il", split="test", streaming=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), sample["audio"]["sampling_rate"], 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

transcription = processor.batch_decode(logits.numpy()).text
print(transcription)
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
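As a rough illustration, the linear warmup-then-decay schedule listed above can be sketched in plain Python (the total step count below is an assumed value for illustration only; the actual run trained for 100 epochs):

```python
# Sketch of a linear warmup + linear decay learning-rate schedule,
# using the hyperparameters listed above (lr=3e-4, 500 warmup steps).
# total_steps is an assumed value for illustration only.
def linear_schedule_lr(step, base_lr=3e-4, warmup_steps=500, total_steps=10_000):
    if step < warmup_steps:
        # ramp linearly from 0 up to base_lr during warmup
        return base_lr * step / warmup_steps
    # then decay linearly from base_lr back to 0
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))  # halfway through warmup: half of base_lr
```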
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0 |
achimoraites/roberta-base_ag_news | achimoraites | 2023-09-15T17:35:35Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:ag_news",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-23T20:55:04Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- ag_news
widget:
- text: Oil and Economy Cloud Stocks' Outlook (Reuters) Reuters - Soaring crude prices
plus worries\about the economy and the outlook for earnings are expected to\hang
over the stock market next week during the depth of the\summer doldrums
- text: Prediction Unit Helps Forecast Wildfires (AP) AP - It's barely dawn when Mike
Fitzpatrick starts his shift with a blur of colorful maps, figures and endless
charts, but already he knows what the day will bring. Lightning will strike in
places he expects. Winds will pick up, moist places will dry and flames will roar
- text: Venezuelans Flood Polls, Voting Extended CARACAS, Venezuela (Reuters) - Venezuelans
voted in huge numbers on Sunday in a historic referendum on whether to recall
left-wing President Hugo Chavez and electoral authorities prolonged voting well
into the night.
pipeline_tag: text-classification
base_model: roberta-base
model-index:
- name: roberta-base_ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_ag_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3692 | 1.0 | 7500 | 0.4305 |
| 1.6035 | 2.0 | 15000 | 1.8071 |
| 0.6766 | 3.0 | 22500 | 0.4494 |
| 0.3733 | 4.0 | 30000 | 0.3943 |
| 0.2483 | 5.0 | 37500 | 0.3583 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2 |
kalhosni/llama2finetune-v1 | kalhosni | 2023-09-15T17:11:37Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"tensorboard",
"autotrain",
"text-generation",
"en",
"dataset:aboonaji/alpaca_micro_demo",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-09-15T16:27:18Z | ---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: apache-2.0
datasets:
- aboonaji/alpaca_micro_demo
language:
- en
library_name: adapter-transformers
---
# Model Trained Using AutoTrain
Example inference code (note: this snippet runs text generation with the base `meta-llama/Llama-2-7b-chat-hf` model; it is not the training script):

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
|
ys7yoo/sts_klue_roberta_large_ep9 | ys7yoo | 2023-09-15T17:11:18Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T16:38:52Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_klue_roberta_large_ep9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_klue_roberta_large_ep9
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3567
- Mse: 0.3567
- Mae: 0.4407
- R2: 0.8367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.3093 | 1.0 | 183 | 0.4915 | 0.4915 | 0.5401 | 0.7750 |
| 0.2188 | 2.0 | 366 | 0.4399 | 0.4399 | 0.4982 | 0.7986 |
| 0.1327 | 3.0 | 549 | 0.4022 | 0.4022 | 0.4647 | 0.8158 |
| 0.1043 | 4.0 | 732 | 0.4094 | 0.4094 | 0.4680 | 0.8125 |
| 0.074 | 5.0 | 915 | 0.4218 | 0.4218 | 0.4784 | 0.8069 |
| 0.0552 | 6.0 | 1098 | 0.3424 | 0.3424 | 0.4356 | 0.8432 |
| 0.0394 | 7.0 | 1281 | 0.3925 | 0.3925 | 0.4691 | 0.8203 |
| 0.031 | 8.0 | 1464 | 0.3723 | 0.3723 | 0.4510 | 0.8295 |
| 0.0234 | 9.0 | 1647 | 0.3567 | 0.3567 | 0.4407 | 0.8367 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kear24100712/piconai321 | kear24100712 | 2023-09-15T17:06:43Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-13T22:43:39Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: piconia321
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
facebook/dinov2-large-imagenet1k-1-layer | facebook | 2023-09-15T16:37:58Z | 1,370 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"dinov2",
"image-classification",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2304.07193",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-14T20:04:10Z | ---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (large-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this checkpoint includes a fine-tuned linear classification head (a single linear layer) on top of the backbone.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
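The linear-probe idea above can be sketched as follows (the tensor here is random stand-in data; `dinov2-large` produces 1024-dimensional embeddings and, at 224×224 resolution with patch size 14, a sequence of 1 [CLS] token plus 256 patch tokens):

```python
import torch
import torch.nn as nn

# Random stand-in for DINOv2 last hidden states: [batch, 1 CLS + 256 patches, dim].
hidden_size, num_classes = 1024, 1000
features = torch.randn(8, 257, hidden_size)

cls_token = features[:, 0]                        # representation of the whole image
classifier = nn.Linear(hidden_size, num_classes)  # the "1-layer" classification head
logits = classifier(cls_token)
print(logits.shape)  # torch.Size([8, 1000])
```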
## Intended uses & limitations
You can use the model for classifying an image among one of the [1000 ImageNet labels](https://huggingface.co/datasets/huggingface/label-files/blob/main/imagenet-1k-id2label.json). See the [model hub](https://huggingface.co/models?search=facebook/dinov2) to look for
other fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-large-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-large-imagenet1k-1-layer')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
ys7yoo/nli_sts_klue_roberta_large_ep1_ep1 | ys7yoo | 2023-09-15T16:34:55Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_klue_roberta_large_ep1",
"base_model:finetune:ys7yoo/nli_klue_roberta_large_ep1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T16:22:46Z | ---
base_model: ys7yoo/nli_klue_roberta_large_ep1
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_ys7yoo_nli_klue_roberta_large_ep1_ep1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_ys7yoo_nli_klue_roberta_large_ep1_ep1
This model is a fine-tuned version of [ys7yoo/nli_klue_roberta_large_ep1](https://huggingface.co/ys7yoo/nli_klue_roberta_large_ep1) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4267
- Mse: 0.4267
- Mae: 0.4852
- R2: 0.8046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.5163 | 1.0 | 183 | 0.4267 | 0.4267 | 0.4852 | 0.8046 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dracero/ppo-LunarLander-v2-16-08-52023 | dracero | 2023-09-15T16:25:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T16:25:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.46 +/- 23.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repository's file list for the actual `.zip` name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="dracero/ppo-LunarLander-v2-16-08-52023",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
lobodav/dogbooth | lobodav | 2023-09-15T16:20:45Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-15T14:30:30Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - lobodav/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
SNavgale/donut-demo | SNavgale | 2023-09-15T16:20:07Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"en",
"license:unlicense",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-09-15T12:07:21Z | ---
license: unlicense
language:
- en
--- |
c-g/ppo-LunarLander-v2 | c-g | 2023-09-15T16:18:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T16:18:07Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.53 +/- 17.16
name: mean_reward
verified: false
---
# **PPO-MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repository's file list for the actual `.zip` name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="c-g/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
stefan-it/umt5-small | stefan-it | 2023-09-15T16:08:35Z | 97 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"umt5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-04-06T08:56:25Z | ---
license: mit
---
# umT5 Small
The UMT5 model was proposed in [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi)
by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
The abstract from the paper is the following:
*Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance
between different languages. However previous work has not systematically evaluated the efficacy of different
pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax,
that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly
capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a
range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax
outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our
contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters
across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
# Integration into Transformers
Overview of umT5 model integration:
* Transformers Integration is on-going, see this awesome [PR](https://github.com/huggingface/transformers/pull/22626) by @agemagician!
* Conversion script (umT5X checkpoints to FLAX) is [here](https://gist.github.com/stefan-it/5d6a4ec89e7ad97181983881434cb4eb). |
TinyPixel/qlora-main-2 | TinyPixel | 2023-09-15T15:56:14Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-11T00:27:37Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
elenaThevalley/mobilenet_v2_1.0_224-finetuned-32bs-0.1lr | elenaThevalley | 2023-09-15T15:46:28Z | 194 | 0 | transformers | [
"transformers",
"pytorch",
"mobilenet_v2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/mobilenet_v2_1.0_224",
"base_model:finetune:google/mobilenet_v2_1.0_224",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-13T07:59:23Z | ---
license: other
base_model: google/mobilenet_v2_1.0_224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: mobilenet_v2_1.0_224-finetuned-32bs-0.1lr
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6468862515002001
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2_1.0_224-finetuned-32bs-0.1lr
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9270
- Accuracy: 0.6469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.99 | 53 | 1.3949 | 0.4049 |
| No log | 1.99 | 107 | 1.0455 | 0.5819 |
| No log | 2.96 | 159 | 0.9270 | 0.6469 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ashwincv0112/code-llama-python-finetune2 | ashwincv0112 | 2023-09-15T15:33:39Z | 1 | 0 | peft | [
"peft",
"pytorch",
"codegen",
"region:us"
]
| null | 2023-09-15T10:01:10Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
paturi1710/fb-detr-table_detection_v1.0 | paturi1710 | 2023-09-15T15:30:19Z | 214 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-09-15T13:23:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: fb-detr-table_detection_v1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fb-detr-table_detection_v1.0
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1131 | 1.21 | 20 | 1.3387 |
| 1.7233 | 2.42 | 40 | 1.1735 |
| 1.4974 | 3.64 | 60 | 1.0333 |
| 1.4395 | 4.85 | 80 | 1.0741 |
| 1.2497 | 6.06 | 100 | 0.7493 |
| 1.0696 | 7.27 | 120 | 0.6951 |
| 1.2718 | 8.48 | 140 | 0.7663 |
| 1.3003 | 9.7 | 160 | 0.9187 |
| 1.1703 | 10.91 | 180 | 0.6581 |
| 1.1463 | 12.12 | 200 | 0.6728 |
| 1.1198 | 13.33 | 220 | 0.6519 |
| 1.1313 | 14.55 | 240 | 0.6019 |
| 0.8707 | 15.76 | 260 | 0.5460 |
| 0.9215 | 16.97 | 280 | 0.5729 |
| 0.8017 | 18.18 | 300 | 0.5418 |
| 0.7221 | 19.39 | 320 | 0.5402 |
| 0.6872 | 20.61 | 340 | 0.5618 |
| 0.729 | 21.82 | 360 | 0.5744 |
| 0.7702 | 23.03 | 380 | 0.5305 |
| 0.7845 | 24.24 | 400 | 0.5043 |
| 0.7473 | 25.45 | 420 | 0.4903 |
| 0.7031 | 26.67 | 440 | 0.4830 |
| 0.6726 | 27.88 | 460 | 0.4640 |
| 0.6327 | 29.09 | 480 | 0.4662 |
| 0.6806 | 30.3 | 500 | 0.4619 |
| 0.6626 | 31.52 | 520 | 0.5005 |
| 0.6622 | 32.73 | 540 | 0.4601 |
| 0.7345 | 33.94 | 560 | 0.5567 |
| 0.7202 | 35.15 | 580 | 0.4721 |
| 0.6754 | 36.36 | 600 | 0.4950 |
| 0.608 | 37.58 | 620 | 0.4949 |
| 0.6812 | 38.79 | 640 | 0.4893 |
| 0.6648 | 40.0 | 660 | 0.5383 |
| 0.5884 | 41.21 | 680 | 0.4344 |
| 0.5823 | 42.42 | 700 | 0.4617 |
| 0.6158 | 43.64 | 720 | 0.4269 |
| 0.5702 | 44.85 | 740 | 0.4209 |
| 0.6794 | 46.06 | 760 | 0.4438 |
| 0.6795 | 47.27 | 780 | 0.4777 |
| 0.661 | 48.48 | 800 | 0.4214 |
| 0.6217 | 49.7 | 820 | 0.4380 |
| 0.6664 | 50.91 | 840 | 0.4573 |
| 0.5767 | 52.12 | 860 | 0.4435 |
| 0.5596 | 53.33 | 880 | 0.4772 |
| 0.5907 | 54.55 | 900 | 0.4336 |
| 0.56 | 55.76 | 920 | 0.4219 |
| 0.566 | 56.97 | 940 | 0.4606 |
| 0.5551 | 58.18 | 960 | 0.4153 |
| 0.5454 | 59.39 | 980 | 0.4567 |
| 0.5452 | 60.61 | 1000 | 0.4702 |
| 0.6073 | 61.82 | 1020 | 0.4247 |
| 0.5517 | 63.03 | 1040 | 0.4300 |
| 0.5351 | 64.24 | 1060 | 0.4356 |
| 0.532 | 65.45 | 1080 | 0.3722 |
| 0.5638 | 66.67 | 1100 | 0.3627 |
| 0.5537 | 67.88 | 1120 | 0.4079 |
| 0.5007 | 69.09 | 1140 | 0.3965 |
| 0.5202 | 70.3 | 1160 | 0.3760 |
| 0.5156 | 71.52 | 1180 | 0.4091 |
| 0.5396 | 72.73 | 1200 | 0.3823 |
| 0.5092 | 73.94 | 1220 | 0.3866 |
| 0.4667 | 75.15 | 1240 | 0.3713 |
| 0.4725 | 76.36 | 1260 | 0.3536 |
| 0.4835 | 77.58 | 1280 | 0.3421 |
| 0.4999 | 78.79 | 1300 | 0.3294 |
| 0.4983 | 80.0 | 1320 | 0.3866 |
| 0.4917 | 81.21 | 1340 | 0.3061 |
| 0.502 | 82.42 | 1360 | 0.3908 |
| 0.5435 | 83.64 | 1380 | 0.3587 |
| 0.4925 | 84.85 | 1400 | 0.3662 |
| 0.469 | 86.06 | 1420 | 0.3547 |
| 0.4184 | 87.27 | 1440 | 0.3229 |
| 0.4232 | 88.48 | 1460 | 0.3478 |
| 0.3962 | 89.7 | 1480 | 0.3286 |
| 0.4217 | 90.91 | 1500 | 0.3668 |
| 0.427 | 92.12 | 1520 | 0.3554 |
| 0.4433 | 93.33 | 1540 | 0.3214 |
| 0.4304 | 94.55 | 1560 | 0.3243 |
| 0.4353 | 95.76 | 1580 | 0.2909 |
| 0.4153 | 96.97 | 1600 | 0.3032 |
| 0.3819 | 98.18 | 1620 | 0.2858 |
| 0.3911 | 99.39 | 1640 | 0.2721 |
| 0.3513 | 100.61 | 1660 | 0.2763 |
| 0.3266 | 101.82 | 1680 | 0.2538 |
| 0.3222 | 103.03 | 1700 | 0.2543 |
| 0.3326 | 104.24 | 1720 | 0.2548 |
| 0.3219 | 105.45 | 1740 | 0.2737 |
| 0.3313 | 106.67 | 1760 | 0.2381 |
| 0.3557 | 107.88 | 1780 | 0.2728 |
| 0.3312 | 109.09 | 1800 | 0.2784 |
| 0.3206 | 110.3 | 1820 | 0.2462 |
| 0.3015 | 111.52 | 1840 | 0.2587 |
| 0.2903 | 112.73 | 1860 | 0.2411 |
| 0.2853 | 113.94 | 1880 | 0.2533 |
| 0.2917 | 115.15 | 1900 | 0.2662 |
| 0.2802 | 116.36 | 1920 | 0.2491 |
| 0.2774 | 117.58 | 1940 | 0.2523 |
| 0.2848 | 118.79 | 1960 | 0.2426 |
| 0.2813 | 120.0 | 1980 | 0.2339 |
| 0.2752 | 121.21 | 2000 | 0.2444 |
| 0.2804 | 122.42 | 2020 | 0.2231 |
| 0.2456 | 123.64 | 2040 | 0.2174 |
| 0.2689 | 124.85 | 2060 | 0.2136 |
| 0.252 | 126.06 | 2080 | 0.2257 |
| 0.2498 | 127.27 | 2100 | 0.2311 |
| 0.2404 | 128.48 | 2120 | 0.2260 |
| 0.2608 | 129.7 | 2140 | 0.2256 |
| 0.2332 | 130.91 | 2160 | 0.2135 |
| 0.2345 | 132.12 | 2180 | 0.2229 |
| 0.2558 | 133.33 | 2200 | 0.2022 |
| 0.2228 | 134.55 | 2220 | 0.2115 |
| 0.2269 | 135.76 | 2240 | 0.2069 |
| 0.2264 | 136.97 | 2260 | 0.2124 |
| 0.2151 | 138.18 | 2280 | 0.2117 |
| 0.2375 | 139.39 | 2300 | 0.1976 |
| 0.2231 | 140.61 | 2320 | 0.2047 |
| 0.2157 | 141.82 | 2340 | 0.2107 |
| 0.2307 | 143.03 | 2360 | 0.1989 |
| 0.2097 | 144.24 | 2380 | 0.2077 |
| 0.2134 | 145.45 | 2400 | 0.2234 |
| 0.1975 | 146.67 | 2420 | 0.2179 |
| 0.2087 | 147.88 | 2440 | 0.2019 |
| 0.2029 | 149.09 | 2460 | 0.2041 |
| 0.2038 | 150.3 | 2480 | 0.2036 |
| 0.2202 | 151.52 | 2500 | 0.1984 |
| 0.203 | 152.73 | 2520 | 0.1943 |
| 0.2201 | 153.94 | 2540 | 0.2064 |
| 0.1868 | 155.15 | 2560 | 0.2126 |
| 0.2185 | 156.36 | 2580 | 0.2131 |
| 0.1917 | 157.58 | 2600 | 0.2031 |
| 0.1898 | 158.79 | 2620 | 0.2009 |
| 0.1923 | 160.0 | 2640 | 0.2170 |
| 0.1865 | 161.21 | 2660 | 0.2068 |
| 0.1971 | 162.42 | 2680 | 0.2053 |
| 0.1942 | 163.64 | 2700 | 0.2011 |
| 0.1902 | 164.85 | 2720 | 0.1993 |
| 0.1817 | 166.06 | 2740 | 0.1952 |
| 0.1837 | 167.27 | 2760 | 0.2222 |
| 0.1835 | 168.48 | 2780 | 0.2173 |
| 0.1923 | 169.7 | 2800 | 0.2072 |
| 0.1798 | 170.91 | 2820 | 0.2069 |
| 0.1815 | 172.12 | 2840 | 0.2078 |
| 0.1724 | 173.33 | 2860 | 0.2183 |
| 0.1924 | 174.55 | 2880 | 0.2005 |
| 0.1922 | 175.76 | 2900 | 0.2069 |
| 0.1709 | 176.97 | 2920 | 0.2212 |
| 0.1766 | 178.18 | 2940 | 0.1978 |
| 0.1728 | 179.39 | 2960 | 0.2029 |
| 0.1757 | 180.61 | 2980 | 0.2030 |
| 0.1665 | 181.82 | 3000 | 0.2219 |
| 0.1694 | 183.03 | 3020 | 0.2205 |
| 0.1786 | 184.24 | 3040 | 0.2020 |
| 0.1749 | 185.45 | 3060 | 0.2007 |
| 0.1739 | 186.67 | 3080 | 0.2046 |
| 0.1723 | 187.88 | 3100 | 0.1986 |
| 0.1669 | 189.09 | 3120 | 0.2041 |
| 0.1658 | 190.3 | 3140 | 0.2179 |
| 0.1701 | 191.52 | 3160 | 0.2159 |
| 0.1691 | 192.73 | 3180 | 0.2099 |
| 0.1739 | 193.94 | 3200 | 0.1996 |
| 0.1729 | 195.15 | 3220 | 0.2126 |
| 0.1636 | 196.36 | 3240 | 0.2080 |
| 0.1612 | 197.58 | 3260 | 0.2154 |
| 0.1653 | 198.79 | 3280 | 0.2031 |
| 0.1629 | 200.0 | 3300 | 0.2206 |
| 0.1565 | 201.21 | 3320 | 0.2223 |
| 0.1632 | 202.42 | 3340 | 0.2122 |
| 0.1689 | 203.64 | 3360 | 0.1986 |
| 0.1682 | 204.85 | 3380 | 0.2092 |
| 0.1671 | 206.06 | 3400 | 0.2309 |
| 0.175 | 207.27 | 3420 | 0.2129 |
| 0.1607 | 208.48 | 3440 | 0.2393 |
| 0.165 | 209.7 | 3460 | 0.2125 |
| 0.1593 | 210.91 | 3480 | 0.2304 |
| 0.1594 | 212.12 | 3500 | 0.2325 |
| 0.1471 | 213.33 | 3520 | 0.2341 |
| 0.1598 | 214.55 | 3540 | 0.2175 |
| 0.1542 | 215.76 | 3560 | 0.2162 |
| 0.1602 | 216.97 | 3580 | 0.2277 |
| 0.1577 | 218.18 | 3600 | 0.2117 |
| 0.1625 | 219.39 | 3620 | 0.2118 |
| 0.1517 | 220.61 | 3640 | 0.2252 |
| 0.1545 | 221.82 | 3660 | 0.2129 |
| 0.152 | 223.03 | 3680 | 0.2216 |
| 0.161 | 224.24 | 3700 | 0.2169 |
| 0.1509 | 225.45 | 3720 | 0.2225 |
| 0.1502 | 226.67 | 3740 | 0.2339 |
| 0.1542 | 227.88 | 3760 | 0.2199 |
| 0.145 | 229.09 | 3780 | 0.2270 |
| 0.1499 | 230.3 | 3800 | 0.2189 |
| 0.1506 | 231.52 | 3820 | 0.2227 |
| 0.1556 | 232.73 | 3840 | 0.2260 |
| 0.1454 | 233.94 | 3860 | 0.2213 |
| 0.1472 | 235.15 | 3880 | 0.2159 |
| 0.1437 | 236.36 | 3900 | 0.2256 |
| 0.1448 | 237.58 | 3920 | 0.2278 |
| 0.1536 | 238.79 | 3940 | 0.2288 |
| 0.1446 | 240.0 | 3960 | 0.2400 |
| 0.1593 | 241.21 | 3980 | 0.2284 |
| 0.1463 | 242.42 | 4000 | 0.2258 |
| 0.1472 | 243.64 | 4020 | 0.2263 |
| 0.1455 | 244.85 | 4040 | 0.2285 |
| 0.1442 | 246.06 | 4060 | 0.2250 |
| 0.1499 | 247.27 | 4080 | 0.2318 |
| 0.1485 | 248.48 | 4100 | 0.2238 |
| 0.1545 | 249.7 | 4120 | 0.2257 |
| 0.1296 | 250.91 | 4140 | 0.2396 |
| 0.1425 | 252.12 | 4160 | 0.2377 |
| 0.1441 | 253.33 | 4180 | 0.2390 |
| 0.1343 | 254.55 | 4200 | 0.2389 |
| 0.1445 | 255.76 | 4220 | 0.2244 |
| 0.1445 | 256.97 | 4240 | 0.2299 |
| 0.1429 | 258.18 | 4260 | 0.2209 |
| 0.1479 | 259.39 | 4280 | 0.2221 |
| 0.1429 | 260.61 | 4300 | 0.2372 |
| 0.1452 | 261.82 | 4320 | 0.2357 |
| 0.1501 | 263.03 | 4340 | 0.2370 |
| 0.1404 | 264.24 | 4360 | 0.2311 |
| 0.1314 | 265.45 | 4380 | 0.2454 |
| 0.1498 | 266.67 | 4400 | 0.2243 |
| 0.1418 | 267.88 | 4420 | 0.2243 |
| 0.1453 | 269.09 | 4440 | 0.2258 |
| 0.1378 | 270.3 | 4460 | 0.2300 |
| 0.1442 | 271.52 | 4480 | 0.2269 |
| 0.1463 | 272.73 | 4500 | 0.2249 |
| 0.1352 | 273.94 | 4520 | 0.2262 |
| 0.1419 | 275.15 | 4540 | 0.2333 |
| 0.1326 | 276.36 | 4560 | 0.2358 |
| 0.1373 | 277.58 | 4580 | 0.2256 |
| 0.1317 | 278.79 | 4600 | 0.2295 |
| 0.1367 | 280.0 | 4620 | 0.2371 |
| 0.1346 | 281.21 | 4640 | 0.2352 |
| 0.1357 | 282.42 | 4660 | 0.2300 |
| 0.1372 | 283.64 | 4680 | 0.2414 |
| 0.1298 | 284.85 | 4700 | 0.2417 |
| 0.1368 | 286.06 | 4720 | 0.2269 |
| 0.1447 | 287.27 | 4740 | 0.2312 |
| 0.1394 | 288.48 | 4760 | 0.2339 |
| 0.1258 | 289.7 | 4780 | 0.2399 |
| 0.1427 | 290.91 | 4800 | 0.2380 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.11.0
|
semaj83/whisper-tiny-en-US | semaj83 | 2023-09-15T15:28:19Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-15T07:19:11Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3695395513577332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9348
- Wer Ortho: 0.3683
- Wer: 0.3695
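For context, the WER reported above is the word-level edit distance between hypothesis and reference divided by the number of reference words. A minimal pure-Python sketch of that definition (illustrative only, not the `evaluate`/`jiwer` implementation used during training):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("turn off the lights", "turn of the light"))  # 2 edits / 4 words = 0.5
```

"Wer Ortho" differs from "Wer" only in that the latter is computed after text normalization (lowercasing, punctuation removal), which is why the two values above are close but not identical.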
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
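The linear schedule with warmup listed above ramps the learning rate from 0 to the peak over the warmup steps, then decays it linearly to 0 at the final step. A small sketch mirroring the stated values (1e-05 peak, 50 warmup steps, 1000 total steps); this follows the usual `get_linear_schedule_with_warmup` behavior but is written out here for illustration:

```python
def linear_schedule_with_warmup(step, peak_lr=1e-5, warmup_steps=50, total_steps=1000):
    """Learning rate at a given optimizer step under linear warmup + decay."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps      # linear warmup from 0
    # linear decay from the peak down to 0 at total_steps
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)

print(linear_schedule_with_warmup(0))     # 0.0
print(linear_schedule_with_warmup(50))    # peak: 1e-05
print(linear_schedule_with_warmup(1000))  # 0.0
```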
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0 | 17.24 | 500 | 0.9155 | 0.3646 | 0.3654 |
| 0.0 | 34.48 | 1000 | 0.9348 | 0.3683 | 0.3695 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bk6000/rl_course_vizdoom_health_gathering_supreme | bk6000 | 2023-09-15T15:23:47Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-14T18:56:13Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 4.42 +/- 0.87
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r bk6000/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to raise `--train_for_env_steps` to a suitably high number, since the experiment resumes from the step count at which it previously stopped.
|
sagarsdesai/poca-SoccerTwos | sagarsdesai | 2023-09-15T15:23:02Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-15T15:19:11Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sagarsdesai/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ruhul0/dreambooth | ruhul0 | 2023-09-15T15:16:59Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-15T13:13:58Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Aesthetic Headshot for linkedin
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
muhtasham/bert-tiny-finetuned-finer | muhtasham | 2023-09-15T15:13:40Z | 118 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:finer-139",
"dataset:nlpaueb/finer-139",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-25T01:20:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- finer-139
- nlpaueb/finer-139
metrics:
- precision
- recall
- f1
- accuracy
base_model: google/bert_uncased_L-2_H-128_A-2
model-index:
- name: bertiny-finetuned-finer
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: finer-139
type: finer-139
args: finer-139
metrics:
- type: precision
value: 0.5339285714285714
name: Precision
- type: recall
value: 0.036011080332409975
name: Recall
- type: f1
value: 0.06747151077513258
name: F1
- type: accuracy
value: 0.9847166143263048
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertiny-finetuned-finer
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the finer-139 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0882
- Precision: 0.5339
- Recall: 0.0360
- F1: 0.0675
- Accuracy: 0.9847
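The F1 above is the harmonic mean of the reported precision and recall, which is why the very low recall dominates the score despite a precision above 0.5. A quick check against the numbers in this card (the training pipeline computes F1 from raw counts, but for a single aggregate score the harmonic-mean form is equivalent):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.5339285714285714, 0.036011080332409975)
print(round(f1, 4))  # 0.0675, matching the reported F1
```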
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
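Adam with the betas and epsilon listed above keeps exponential moving averages of the gradient and its square, with bias correction early in training. A single-parameter sketch of one update step (illustrative, not the PyTorch implementation):

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-5, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; returns (param, m, v)."""
    m = b1 * m + (1 - b1) * grad           # first-moment EMA
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment EMA
    m_hat = m / (1 - b1 ** t)              # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# On the first step the bias-corrected update is close to lr * sign(grad).
p, m, v = adam_step(param=0.0, grad=0.5, m=0.0, v=0.0, t=1)
print(p)  # ≈ -2e-05
```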
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0871 | 1.0 | 11255 | 0.0952 | 0.0 | 0.0 | 0.0 | 0.9843 |
| 0.0864 | 2.0 | 22510 | 0.0895 | 0.7640 | 0.0082 | 0.0162 | 0.9844 |
| 0.0929 | 3.0 | 33765 | 0.0882 | 0.5339 | 0.0360 | 0.0675 | 0.9847 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
muhtasham/bert-tiny-finetuned-legal-definitions-downstream-alt | muhtasham | 2023-09-15T15:13:11Z | 138 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-05T05:16:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-tiny-finetuned-legal-definitions-downstream-alt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-legal-definitions-downstream-alt
This model is a fine-tuned version of [muhtasham/bert-tiny-finetuned-legal-definitions](https://huggingface.co/muhtasham/bert-tiny-finetuned-legal-definitions) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Micro f1: 0.0
- Macro f1: 0.0
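Micro F1 pools true positives, false positives, and false negatives across all labels before computing F1, while macro F1 averages the per-label F1 scores; once the model stops predicting any positive label (as in the later epochs of the table below), both collapse to 0. A toy multi-label sketch of the distinction (an assumed illustration, not the trainer's metric code):

```python
def micro_macro_f1(y_true, y_pred, num_labels):
    """y_true / y_pred: lists of sets of label ids, one set per example."""
    tp = [0] * num_labels
    fp = [0] * num_labels
    fn = [0] * num_labels
    for t, p in zip(y_true, y_pred):
        for label in range(num_labels):
            if label in p and label in t:
                tp[label] += 1
            elif label in p:
                fp[label] += 1
            elif label in t:
                fn[label] += 1

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    micro = f1(sum(tp), sum(fp), sum(fn))
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in range(num_labels)) / num_labels
    return micro, macro

# Label 0 is common and predicted well; label 1 is rare and never predicted,
# so macro F1 is pulled down much harder than micro F1.
micro, macro = micro_macro_f1(
    y_true=[{0}, {0}, {0, 1}], y_pred=[{0}, {0}, {0}], num_labels=2)
print(micro, macro)
```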
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4096
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.6869 | 1.0 | 3 | 0.6836 | 0.1357 | 0.0837 |
| 0.6832 | 2.0 | 6 | 0.6795 | 0.1402 | 0.0785 |
| 0.6795 | 3.0 | 9 | 0.6755 | 0.1468 | 0.0748 |
| 0.6758 | 4.0 | 12 | 0.6716 | 0.1504 | 0.0697 |
| 0.6722 | 5.0 | 15 | 0.6677 | 0.1506 | 0.0627 |
| 0.6686 | 6.0 | 18 | 0.6639 | 0.1528 | 0.0580 |
| 0.6649 | 7.0 | 21 | 0.6601 | 0.1435 | 0.0506 |
| 0.6615 | 8.0 | 24 | 0.6563 | 0.1300 | 0.0432 |
| 0.658 | 9.0 | 27 | 0.6526 | 0.1049 | 0.0307 |
| 0.6544 | 10.0 | 30 | 0.6490 | 0.0859 | 0.0242 |
| 0.6511 | 11.0 | 33 | 0.6454 | 0.0610 | 0.0178 |
| 0.6476 | 12.0 | 36 | 0.6418 | 0.0362 | 0.0110 |
| 0.644 | 13.0 | 39 | 0.6383 | 0.0219 | 0.0074 |
| 0.641 | 14.0 | 42 | 0.6349 | 0.0108 | 0.0042 |
| 0.6376 | 15.0 | 45 | 0.6315 | 0.0057 | 0.0022 |
| 0.6344 | 16.0 | 48 | 0.6282 | 0.0024 | 0.0010 |
| 0.6311 | 17.0 | 51 | 0.6249 | 0.0012 | 0.0005 |
| 0.6279 | 18.0 | 54 | 0.6217 | 0.0008 | 0.0003 |
| 0.6248 | 19.0 | 57 | 0.6184 | 0.0 | 0.0 |
| 0.6218 | 20.0 | 60 | 0.6153 | 0.0 | 0.0 |
| 0.6185 | 21.0 | 63 | 0.6121 | 0.0 | 0.0 |
| 0.6153 | 22.0 | 66 | 0.6089 | 0.0 | 0.0 |
| 0.6123 | 23.0 | 69 | 0.6058 | 0.0 | 0.0 |
| 0.6092 | 24.0 | 72 | 0.6027 | 0.0 | 0.0 |
| 0.6062 | 25.0 | 75 | 0.5996 | 0.0 | 0.0 |
| 0.6032 | 26.0 | 78 | 0.5965 | 0.0 | 0.0 |
| 0.6001 | 27.0 | 81 | 0.5934 | 0.0 | 0.0 |
| 0.597 | 28.0 | 84 | 0.5902 | 0.0 | 0.0 |
| 0.594 | 29.0 | 87 | 0.5871 | 0.0 | 0.0 |
| 0.5908 | 30.0 | 90 | 0.5840 | 0.0 | 0.0 |
| 0.5877 | 31.0 | 93 | 0.5808 | 0.0 | 0.0 |
| 0.5845 | 32.0 | 96 | 0.5777 | 0.0 | 0.0 |
| 0.5814 | 33.0 | 99 | 0.5745 | 0.0 | 0.0 |
| 0.5785 | 34.0 | 102 | 0.5714 | 0.0 | 0.0 |
| 0.5753 | 35.0 | 105 | 0.5682 | 0.0 | 0.0 |
| 0.5719 | 36.0 | 108 | 0.5650 | 0.0 | 0.0 |
| 0.5688 | 37.0 | 111 | 0.5618 | 0.0 | 0.0 |
| 0.5656 | 38.0 | 114 | 0.5586 | 0.0 | 0.0 |
| 0.5626 | 39.0 | 117 | 0.5553 | 0.0 | 0.0 |
| 0.5591 | 40.0 | 120 | 0.5521 | 0.0 | 0.0 |
| 0.5561 | 41.0 | 123 | 0.5489 | 0.0 | 0.0 |
| 0.5529 | 42.0 | 126 | 0.5456 | 0.0 | 0.0 |
| 0.5494 | 43.0 | 129 | 0.5424 | 0.0 | 0.0 |
| 0.5464 | 44.0 | 132 | 0.5391 | 0.0 | 0.0 |
| 0.5432 | 45.0 | 135 | 0.5359 | 0.0 | 0.0 |
| 0.5399 | 46.0 | 138 | 0.5326 | 0.0 | 0.0 |
| 0.5364 | 47.0 | 141 | 0.5294 | 0.0 | 0.0 |
| 0.5332 | 48.0 | 144 | 0.5261 | 0.0 | 0.0 |
| 0.53 | 49.0 | 147 | 0.5229 | 0.0 | 0.0 |
| 0.5268 | 50.0 | 150 | 0.5196 | 0.0 | 0.0 |
| 0.5236 | 51.0 | 153 | 0.5164 | 0.0 | 0.0 |
| 0.5203 | 52.0 | 156 | 0.5132 | 0.0 | 0.0 |
| 0.5171 | 53.0 | 159 | 0.5099 | 0.0 | 0.0 |
| 0.514 | 54.0 | 162 | 0.5067 | 0.0 | 0.0 |
| 0.5107 | 55.0 | 165 | 0.5035 | 0.0 | 0.0 |
| 0.5077 | 56.0 | 168 | 0.5004 | 0.0 | 0.0 |
| 0.504 | 57.0 | 171 | 0.4972 | 0.0 | 0.0 |
| 0.5011 | 58.0 | 174 | 0.4941 | 0.0 | 0.0 |
| 0.4978 | 59.0 | 177 | 0.4909 | 0.0 | 0.0 |
| 0.4948 | 60.0 | 180 | 0.4878 | 0.0 | 0.0 |
| 0.4915 | 61.0 | 183 | 0.4847 | 0.0 | 0.0 |
| 0.4884 | 62.0 | 186 | 0.4816 | 0.0 | 0.0 |
| 0.4857 | 63.0 | 189 | 0.4786 | 0.0 | 0.0 |
| 0.4824 | 64.0 | 192 | 0.4756 | 0.0 | 0.0 |
| 0.4794 | 65.0 | 195 | 0.4726 | 0.0 | 0.0 |
| 0.4762 | 66.0 | 198 | 0.4696 | 0.0 | 0.0 |
| 0.4733 | 67.0 | 201 | 0.4666 | 0.0 | 0.0 |
| 0.4704 | 68.0 | 204 | 0.4637 | 0.0 | 0.0 |
| 0.4676 | 69.0 | 207 | 0.4608 | 0.0 | 0.0 |
| 0.4648 | 70.0 | 210 | 0.4579 | 0.0 | 0.0 |
| 0.4617 | 71.0 | 213 | 0.4551 | 0.0 | 0.0 |
| 0.4586 | 72.0 | 216 | 0.4523 | 0.0 | 0.0 |
| 0.4558 | 73.0 | 219 | 0.4495 | 0.0 | 0.0 |
| 0.453 | 74.0 | 222 | 0.4467 | 0.0 | 0.0 |
| 0.4502 | 75.0 | 225 | 0.4440 | 0.0 | 0.0 |
| 0.4476 | 76.0 | 228 | 0.4413 | 0.0 | 0.0 |
| 0.4448 | 77.0 | 231 | 0.4386 | 0.0 | 0.0 |
| 0.442 | 78.0 | 234 | 0.4360 | 0.0 | 0.0 |
| 0.4396 | 79.0 | 237 | 0.4334 | 0.0 | 0.0 |
| 0.4372 | 80.0 | 240 | 0.4308 | 0.0 | 0.0 |
| 0.434 | 81.0 | 243 | 0.4282 | 0.0 | 0.0 |
| 0.4318 | 82.0 | 246 | 0.4257 | 0.0 | 0.0 |
| 0.4292 | 83.0 | 249 | 0.4232 | 0.0 | 0.0 |
| 0.4267 | 84.0 | 252 | 0.4208 | 0.0 | 0.0 |
| 0.4242 | 85.0 | 255 | 0.4184 | 0.0 | 0.0 |
| 0.4218 | 86.0 | 258 | 0.4160 | 0.0 | 0.0 |
| 0.4191 | 87.0 | 261 | 0.4136 | 0.0 | 0.0 |
| 0.4169 | 88.0 | 264 | 0.4113 | 0.0 | 0.0 |
| 0.4147 | 89.0 | 267 | 0.4089 | 0.0 | 0.0 |
| 0.4118 | 90.0 | 270 | 0.4067 | 0.0 | 0.0 |
| 0.41 | 91.0 | 273 | 0.4044 | 0.0 | 0.0 |
| 0.4079 | 92.0 | 276 | 0.4022 | 0.0 | 0.0 |
| 0.4052 | 93.0 | 279 | 0.4000 | 0.0 | 0.0 |
| 0.4031 | 94.0 | 282 | 0.3979 | 0.0 | 0.0 |
| 0.4009 | 95.0 | 285 | 0.3957 | 0.0 | 0.0 |
| 0.3986 | 96.0 | 288 | 0.3936 | 0.0 | 0.0 |
| 0.3969 | 97.0 | 291 | 0.3916 | 0.0 | 0.0 |
| 0.3944 | 98.0 | 294 | 0.3895 | 0.0 | 0.0 |
| 0.3928 | 99.0 | 297 | 0.3875 | 0.0 | 0.0 |
| 0.3906 | 100.0 | 300 | 0.3855 | 0.0 | 0.0 |
| 0.3886 | 101.0 | 303 | 0.3836 | 0.0 | 0.0 |
| 0.3864 | 102.0 | 306 | 0.3816 | 0.0 | 0.0 |
| 0.3849 | 103.0 | 309 | 0.3797 | 0.0 | 0.0 |
| 0.3833 | 104.0 | 312 | 0.3779 | 0.0 | 0.0 |
| 0.3809 | 105.0 | 315 | 0.3760 | 0.0 | 0.0 |
| 0.3788 | 106.0 | 318 | 0.3742 | 0.0 | 0.0 |
| 0.3771 | 107.0 | 321 | 0.3724 | 0.0 | 0.0 |
| 0.3751 | 108.0 | 324 | 0.3706 | 0.0 | 0.0 |
| 0.3737 | 109.0 | 327 | 0.3689 | 0.0 | 0.0 |
| 0.3719 | 110.0 | 330 | 0.3672 | 0.0 | 0.0 |
| 0.3702 | 111.0 | 333 | 0.3655 | 0.0 | 0.0 |
| 0.3686 | 112.0 | 336 | 0.3638 | 0.0 | 0.0 |
| 0.3666 | 113.0 | 339 | 0.3622 | 0.0 | 0.0 |
| 0.3654 | 114.0 | 342 | 0.3606 | 0.0 | 0.0 |
| 0.3638 | 115.0 | 345 | 0.3590 | 0.0 | 0.0 |
| 0.3617 | 116.0 | 348 | 0.3574 | 0.0 | 0.0 |
| 0.3603 | 117.0 | 351 | 0.3558 | 0.0 | 0.0 |
| 0.3587 | 118.0 | 354 | 0.3543 | 0.0 | 0.0 |
| 0.3574 | 119.0 | 357 | 0.3528 | 0.0 | 0.0 |
| 0.3561 | 120.0 | 360 | 0.3513 | 0.0 | 0.0 |
| 0.3543 | 121.0 | 363 | 0.3499 | 0.0 | 0.0 |
| 0.3525 | 122.0 | 366 | 0.3484 | 0.0 | 0.0 |
| 0.3509 | 123.0 | 369 | 0.3470 | 0.0 | 0.0 |
| 0.3492 | 124.0 | 372 | 0.3456 | 0.0 | 0.0 |
| 0.3483 | 125.0 | 375 | 0.3442 | 0.0 | 0.0 |
| 0.3467 | 126.0 | 378 | 0.3429 | 0.0 | 0.0 |
| 0.3455 | 127.0 | 381 | 0.3415 | 0.0 | 0.0 |
| 0.344 | 128.0 | 384 | 0.3402 | 0.0 | 0.0 |
| 0.3425 | 129.0 | 387 | 0.3389 | 0.0 | 0.0 |
| 0.3417 | 130.0 | 390 | 0.3376 | 0.0 | 0.0 |
| 0.3403 | 131.0 | 393 | 0.3364 | 0.0 | 0.0 |
| 0.339 | 132.0 | 396 | 0.3351 | 0.0 | 0.0 |
| 0.3377 | 133.0 | 399 | 0.3339 | 0.0 | 0.0 |
| 0.3362 | 134.0 | 402 | 0.3327 | 0.0 | 0.0 |
| 0.3352 | 135.0 | 405 | 0.3315 | 0.0 | 0.0 |
| 0.3338 | 136.0 | 408 | 0.3303 | 0.0 | 0.0 |
| 0.3327 | 137.0 | 411 | 0.3291 | 0.0 | 0.0 |
| 0.3315 | 138.0 | 414 | 0.3280 | 0.0 | 0.0 |
| 0.33 | 139.0 | 417 | 0.3269 | 0.0 | 0.0 |
| 0.3295 | 140.0 | 420 | 0.3258 | 0.0 | 0.0 |
| 0.3278 | 141.0 | 423 | 0.3247 | 0.0 | 0.0 |
| 0.3274 | 142.0 | 426 | 0.3236 | 0.0 | 0.0 |
| 0.3263 | 143.0 | 429 | 0.3225 | 0.0 | 0.0 |
| 0.3245 | 144.0 | 432 | 0.3215 | 0.0 | 0.0 |
| 0.3238 | 145.0 | 435 | 0.3204 | 0.0 | 0.0 |
| 0.3226 | 146.0 | 438 | 0.3194 | 0.0 | 0.0 |
| 0.3217 | 147.0 | 441 | 0.3184 | 0.0 | 0.0 |
| 0.3207 | 148.0 | 444 | 0.3174 | 0.0 | 0.0 |
| 0.3201 | 149.0 | 447 | 0.3164 | 0.0 | 0.0 |
| 0.3191 | 150.0 | 450 | 0.3155 | 0.0 | 0.0 |
| 0.3172 | 151.0 | 453 | 0.3145 | 0.0 | 0.0 |
| 0.316 | 152.0 | 456 | 0.3136 | 0.0 | 0.0 |
| 0.3155 | 153.0 | 459 | 0.3127 | 0.0 | 0.0 |
| 0.3149 | 154.0 | 462 | 0.3117 | 0.0 | 0.0 |
| 0.3139 | 155.0 | 465 | 0.3108 | 0.0 | 0.0 |
| 0.3131 | 156.0 | 468 | 0.3100 | 0.0 | 0.0 |
| 0.3121 | 157.0 | 471 | 0.3091 | 0.0 | 0.0 |
| 0.3113 | 158.0 | 474 | 0.3082 | 0.0 | 0.0 |
| 0.3099 | 159.0 | 477 | 0.3074 | 0.0 | 0.0 |
| 0.3097 | 160.0 | 480 | 0.3065 | 0.0 | 0.0 |
| 0.3087 | 161.0 | 483 | 0.3057 | 0.0 | 0.0 |
| 0.3079 | 162.0 | 486 | 0.3049 | 0.0 | 0.0 |
| 0.3064 | 163.0 | 489 | 0.3041 | 0.0 | 0.0 |
| 0.3062 | 164.0 | 492 | 0.3033 | 0.0 | 0.0 |
| 0.3055 | 165.0 | 495 | 0.3025 | 0.0 | 0.0 |
| 0.3045 | 166.0 | 498 | 0.3017 | 0.0 | 0.0 |
| 0.3038 | 167.0 | 501 | 0.3010 | 0.0 | 0.0 |
| 0.3029 | 168.0 | 504 | 0.3002 | 0.0 | 0.0 |
| 0.3025 | 169.0 | 507 | 0.2995 | 0.0 | 0.0 |
| 0.3016 | 170.0 | 510 | 0.2987 | 0.0 | 0.0 |
| 0.3005 | 171.0 | 513 | 0.2980 | 0.0 | 0.0 |
| 0.3003 | 172.0 | 516 | 0.2973 | 0.0 | 0.0 |
| 0.2989 | 173.0 | 519 | 0.2966 | 0.0 | 0.0 |
| 0.2987 | 174.0 | 522 | 0.2959 | 0.0 | 0.0 |
| 0.2974 | 175.0 | 525 | 0.2952 | 0.0 | 0.0 |
| 0.2974 | 176.0 | 528 | 0.2945 | 0.0 | 0.0 |
| 0.2967 | 177.0 | 531 | 0.2939 | 0.0 | 0.0 |
| 0.2954 | 178.0 | 534 | 0.2932 | 0.0 | 0.0 |
| 0.295 | 179.0 | 537 | 0.2926 | 0.0 | 0.0 |
| 0.2944 | 180.0 | 540 | 0.2919 | 0.0 | 0.0 |
| 0.2938 | 181.0 | 543 | 0.2913 | 0.0 | 0.0 |
| 0.2932 | 182.0 | 546 | 0.2906 | 0.0 | 0.0 |
| 0.2923 | 183.0 | 549 | 0.2900 | 0.0 | 0.0 |
| 0.2917 | 184.0 | 552 | 0.2894 | 0.0 | 0.0 |
| 0.2914 | 185.0 | 555 | 0.2888 | 0.0 | 0.0 |
| 0.2906 | 186.0 | 558 | 0.2882 | 0.0 | 0.0 |
| 0.29 | 187.0 | 561 | 0.2876 | 0.0 | 0.0 |
| 0.2893 | 188.0 | 564 | 0.2870 | 0.0 | 0.0 |
| 0.2892 | 189.0 | 567 | 0.2865 | 0.0 | 0.0 |
| 0.2882 | 190.0 | 570 | 0.2859 | 0.0 | 0.0 |
| 0.2874 | 191.0 | 573 | 0.2853 | 0.0 | 0.0 |
| 0.2868 | 192.0 | 576 | 0.2848 | 0.0 | 0.0 |
| 0.2866 | 193.0 | 579 | 0.2843 | 0.0 | 0.0 |
| 0.2865 | 194.0 | 582 | 0.2837 | 0.0 | 0.0 |
| 0.285 | 195.0 | 585 | 0.2832 | 0.0 | 0.0 |
| 0.2851 | 196.0 | 588 | 0.2827 | 0.0 | 0.0 |
| 0.2841 | 197.0 | 591 | 0.2821 | 0.0 | 0.0 |
| 0.2839 | 198.0 | 594 | 0.2816 | 0.0 | 0.0 |
| 0.2829 | 199.0 | 597 | 0.2811 | 0.0 | 0.0 |
| 0.2826 | 200.0 | 600 | 0.2806 | 0.0 | 0.0 |
| 0.2825 | 201.0 | 603 | 0.2801 | 0.0 | 0.0 |
| 0.2817 | 202.0 | 606 | 0.2796 | 0.0 | 0.0 |
| 0.2813 | 203.0 | 609 | 0.2792 | 0.0 | 0.0 |
| 0.2805 | 204.0 | 612 | 0.2787 | 0.0 | 0.0 |
| 0.2803 | 205.0 | 615 | 0.2782 | 0.0 | 0.0 |
| 0.2798 | 206.0 | 618 | 0.2777 | 0.0 | 0.0 |
| 0.2795 | 207.0 | 621 | 0.2773 | 0.0 | 0.0 |
| 0.2788 | 208.0 | 624 | 0.2768 | 0.0 | 0.0 |
| 0.2785 | 209.0 | 627 | 0.2764 | 0.0 | 0.0 |
| 0.2781 | 210.0 | 630 | 0.2759 | 0.0 | 0.0 |
| 0.2779 | 211.0 | 633 | 0.2755 | 0.0 | 0.0 |
| 0.277 | 212.0 | 636 | 0.2751 | 0.0 | 0.0 |
| 0.2768 | 213.0 | 639 | 0.2746 | 0.0 | 0.0 |
| 0.2763 | 214.0 | 642 | 0.2742 | 0.0 | 0.0 |
| 0.2756 | 215.0 | 645 | 0.2738 | 0.0 | 0.0 |
| 0.2756 | 216.0 | 648 | 0.2734 | 0.0 | 0.0 |
| 0.2748 | 217.0 | 651 | 0.2730 | 0.0 | 0.0 |
| 0.2739 | 218.0 | 654 | 0.2726 | 0.0 | 0.0 |
| 0.2741 | 219.0 | 657 | 0.2722 | 0.0 | 0.0 |
| 0.2735 | 220.0 | 660 | 0.2718 | 0.0 | 0.0 |
| 0.2736 | 221.0 | 663 | 0.2714 | 0.0 | 0.0 |
| 0.2729 | 222.0 | 666 | 0.2710 | 0.0 | 0.0 |
| 0.2728 | 223.0 | 669 | 0.2706 | 0.0 | 0.0 |
| 0.2725 | 224.0 | 672 | 0.2702 | 0.0 | 0.0 |
| 0.2719 | 225.0 | 675 | 0.2699 | 0.0 | 0.0 |
| 0.2712 | 226.0 | 678 | 0.2695 | 0.0 | 0.0 |
| 0.2708 | 227.0 | 681 | 0.2691 | 0.0 | 0.0 |
| 0.2707 | 228.0 | 684 | 0.2688 | 0.0 | 0.0 |
| 0.27 | 229.0 | 687 | 0.2684 | 0.0 | 0.0 |
| 0.2697 | 230.0 | 690 | 0.2681 | 0.0 | 0.0 |
| 0.2702 | 231.0 | 693 | 0.2677 | 0.0 | 0.0 |
| 0.2693 | 232.0 | 696 | 0.2674 | 0.0 | 0.0 |
| 0.2686 | 233.0 | 699 | 0.2670 | 0.0 | 0.0 |
| 0.2681 | 234.0 | 702 | 0.2667 | 0.0 | 0.0 |
| 0.2684 | 235.0 | 705 | 0.2664 | 0.0 | 0.0 |
| 0.2681 | 236.0 | 708 | 0.2660 | 0.0 | 0.0 |
| 0.2672 | 237.0 | 711 | 0.2657 | 0.0 | 0.0 |
| 0.2676 | 238.0 | 714 | 0.2654 | 0.0 | 0.0 |
| 0.2672 | 239.0 | 717 | 0.2651 | 0.0 | 0.0 |
| 0.2658 | 240.0 | 720 | 0.2648 | 0.0 | 0.0 |
| 0.2662 | 241.0 | 723 | 0.2645 | 0.0 | 0.0 |
| 0.2658 | 242.0 | 726 | 0.2642 | 0.0 | 0.0 |
| 0.2654 | 243.0 | 729 | 0.2638 | 0.0 | 0.0 |
| 0.2651 | 244.0 | 732 | 0.2635 | 0.0 | 0.0 |
| 0.2643 | 245.0 | 735 | 0.2632 | 0.0 | 0.0 |
| 0.2651 | 246.0 | 738 | 0.2630 | 0.0 | 0.0 |
| 0.2644 | 247.0 | 741 | 0.2627 | 0.0 | 0.0 |
| 0.2639 | 248.0 | 744 | 0.2624 | 0.0 | 0.0 |
| 0.2636 | 249.0 | 747 | 0.2621 | 0.0 | 0.0 |
| 0.2636 | 250.0 | 750 | 0.2618 | 0.0 | 0.0 |
| 0.2627 | 251.0 | 753 | 0.2615 | 0.0 | 0.0 |
| 0.2623 | 252.0 | 756 | 0.2613 | 0.0 | 0.0 |
| 0.2626 | 253.0 | 759 | 0.2610 | 0.0 | 0.0 |
| 0.2621 | 254.0 | 762 | 0.2607 | 0.0 | 0.0 |
| 0.2626 | 255.0 | 765 | 0.2604 | 0.0 | 0.0 |
| 0.2623 | 256.0 | 768 | 0.2602 | 0.0 | 0.0 |
| 0.2618 | 257.0 | 771 | 0.2599 | 0.0 | 0.0 |
| 0.2612 | 258.0 | 774 | 0.2597 | 0.0 | 0.0 |
| 0.2604 | 259.0 | 777 | 0.2594 | 0.0 | 0.0 |
| 0.2609 | 260.0 | 780 | 0.2591 | 0.0 | 0.0 |
| 0.2601 | 261.0 | 783 | 0.2589 | 0.0 | 0.0 |
| 0.2597 | 262.0 | 786 | 0.2586 | 0.0 | 0.0 |
| 0.2597 | 263.0 | 789 | 0.2584 | 0.0 | 0.0 |
| 0.2594 | 264.0 | 792 | 0.2582 | 0.0 | 0.0 |
| 0.2598 | 265.0 | 795 | 0.2579 | 0.0 | 0.0 |
| 0.2599 | 266.0 | 798 | 0.2577 | 0.0 | 0.0 |
| 0.2588 | 267.0 | 801 | 0.2574 | 0.0 | 0.0 |
| 0.2592 | 268.0 | 804 | 0.2572 | 0.0 | 0.0 |
| 0.2586 | 269.0 | 807 | 0.2570 | 0.0 | 0.0 |
| 0.2594 | 270.0 | 810 | 0.2568 | 0.0 | 0.0 |
| 0.258 | 271.0 | 813 | 0.2565 | 0.0 | 0.0 |
| 0.257 | 272.0 | 816 | 0.2563 | 0.0 | 0.0 |
| 0.2576 | 273.0 | 819 | 0.2561 | 0.0 | 0.0 |
| 0.257 | 274.0 | 822 | 0.2559 | 0.0 | 0.0 |
| 0.2568 | 275.0 | 825 | 0.2556 | 0.0 | 0.0 |
| 0.2558 | 276.0 | 828 | 0.2554 | 0.0 | 0.0 |
| 0.2567 | 277.0 | 831 | 0.2552 | 0.0 | 0.0 |
| 0.2568 | 278.0 | 834 | 0.2550 | 0.0 | 0.0 |
| 0.2561 | 279.0 | 837 | 0.2548 | 0.0 | 0.0 |
| 0.2562 | 280.0 | 840 | 0.2546 | 0.0 | 0.0 |
| 0.2564 | 281.0 | 843 | 0.2544 | 0.0 | 0.0 |
| 0.2555 | 282.0 | 846 | 0.2542 | 0.0 | 0.0 |
| 0.2556 | 283.0 | 849 | 0.2540 | 0.0 | 0.0 |
| 0.2554 | 284.0 | 852 | 0.2538 | 0.0 | 0.0 |
| 0.2542 | 285.0 | 855 | 0.2536 | 0.0 | 0.0 |
| 0.2545 | 286.0 | 858 | 0.2534 | 0.0 | 0.0 |
| 0.2542 | 287.0 | 861 | 0.2532 | 0.0 | 0.0 |
| 0.2545 | 288.0 | 864 | 0.2530 | 0.0 | 0.0 |
| 0.254 | 289.0 | 867 | 0.2528 | 0.0 | 0.0 |
| 0.2543 | 290.0 | 870 | 0.2526 | 0.0 | 0.0 |
| 0.254 | 291.0 | 873 | 0.2524 | 0.0 | 0.0 |
| 0.2536 | 292.0 | 876 | 0.2523 | 0.0 | 0.0 |
| 0.2536 | 293.0 | 879 | 0.2521 | 0.0 | 0.0 |
| 0.2533 | 294.0 | 882 | 0.2519 | 0.0 | 0.0 |
| 0.2532 | 295.0 | 885 | 0.2517 | 0.0 | 0.0 |
| 0.2531 | 296.0 | 888 | 0.2515 | 0.0 | 0.0 |
| 0.2529 | 297.0 | 891 | 0.2514 | 0.0 | 0.0 |
| 0.2522 | 298.0 | 894 | 0.2512 | 0.0 | 0.0 |
| 0.2527 | 299.0 | 897 | 0.2510 | 0.0 | 0.0 |
| 0.2523 | 300.0 | 900 | 0.2508 | 0.0 | 0.0 |
| 0.2518 | 301.0 | 903 | 0.2507 | 0.0 | 0.0 |
| 0.2515 | 302.0 | 906 | 0.2505 | 0.0 | 0.0 |
| 0.2513 | 303.0 | 909 | 0.2503 | 0.0 | 0.0 |
| 0.2521 | 304.0 | 912 | 0.2502 | 0.0 | 0.0 |
| 0.2514 | 305.0 | 915 | 0.2500 | 0.0 | 0.0 |
| 0.2505 | 306.0 | 918 | 0.2499 | 0.0 | 0.0 |
| 0.2511 | 307.0 | 921 | 0.2497 | 0.0 | 0.0 |
| 0.251 | 308.0 | 924 | 0.2495 | 0.0 | 0.0 |
| 0.2504 | 309.0 | 927 | 0.2494 | 0.0 | 0.0 |
| 0.2503 | 310.0 | 930 | 0.2492 | 0.0 | 0.0 |
| 0.2504 | 311.0 | 933 | 0.2491 | 0.0 | 0.0 |
| 0.2506 | 312.0 | 936 | 0.2489 | 0.0 | 0.0 |
| 0.2494 | 313.0 | 939 | 0.2488 | 0.0 | 0.0 |
| 0.2491 | 314.0 | 942 | 0.2486 | 0.0 | 0.0 |
| 0.2498 | 315.0 | 945 | 0.2485 | 0.0 | 0.0 |
| 0.2498 | 316.0 | 948 | 0.2483 | 0.0 | 0.0 |
| 0.2491 | 317.0 | 951 | 0.2482 | 0.0 | 0.0 |
| 0.25 | 318.0 | 954 | 0.2480 | 0.0 | 0.0 |
| 0.2493 | 319.0 | 957 | 0.2479 | 0.0 | 0.0 |
| 0.2491 | 320.0 | 960 | 0.2478 | 0.0 | 0.0 |
| 0.2489 | 321.0 | 963 | 0.2476 | 0.0 | 0.0 |
| 0.2484 | 322.0 | 966 | 0.2475 | 0.0 | 0.0 |
| 0.2481 | 323.0 | 969 | 0.2473 | 0.0 | 0.0 |
| 0.248 | 324.0 | 972 | 0.2472 | 0.0 | 0.0 |
| 0.2485 | 325.0 | 975 | 0.2471 | 0.0 | 0.0 |
| 0.2485 | 326.0 | 978 | 0.2469 | 0.0 | 0.0 |
| 0.2477 | 327.0 | 981 | 0.2468 | 0.0 | 0.0 |
| 0.2478 | 328.0 | 984 | 0.2467 | 0.0 | 0.0 |
| 0.2476 | 329.0 | 987 | 0.2465 | 0.0 | 0.0 |
| 0.2481 | 330.0 | 990 | 0.2464 | 0.0 | 0.0 |
| 0.2472 | 331.0 | 993 | 0.2463 | 0.0 | 0.0 |
| 0.247 | 332.0 | 996 | 0.2462 | 0.0 | 0.0 |
| 0.2471 | 333.0 | 999 | 0.2460 | 0.0 | 0.0 |
| 0.2471 | 334.0 | 1002 | 0.2459 | 0.0 | 0.0 |
| 0.2472 | 335.0 | 1005 | 0.2458 | 0.0 | 0.0 |
| 0.2467 | 336.0 | 1008 | 0.2457 | 0.0 | 0.0 |
| 0.246 | 337.0 | 1011 | 0.2455 | 0.0 | 0.0 |
| 0.2469 | 338.0 | 1014 | 0.2454 | 0.0 | 0.0 |
| 0.2465 | 339.0 | 1017 | 0.2453 | 0.0 | 0.0 |
| 0.2467 | 340.0 | 1020 | 0.2452 | 0.0 | 0.0 |
| 0.246 | 341.0 | 1023 | 0.2451 | 0.0 | 0.0 |
| 0.2456 | 342.0 | 1026 | 0.2450 | 0.0 | 0.0 |
| 0.2454 | 343.0 | 1029 | 0.2448 | 0.0 | 0.0 |
| 0.2464 | 344.0 | 1032 | 0.2447 | 0.0 | 0.0 |
| 0.2453 | 345.0 | 1035 | 0.2446 | 0.0 | 0.0 |
| 0.2453 | 346.0 | 1038 | 0.2445 | 0.0 | 0.0 |
| 0.2459 | 347.0 | 1041 | 0.2444 | 0.0 | 0.0 |
| 0.2452 | 348.0 | 1044 | 0.2443 | 0.0 | 0.0 |
| 0.2452 | 349.0 | 1047 | 0.2442 | 0.0 | 0.0 |
| 0.2454 | 350.0 | 1050 | 0.2441 | 0.0 | 0.0 |
| 0.245 | 351.0 | 1053 | 0.2440 | 0.0 | 0.0 |
| 0.2442 | 352.0 | 1056 | 0.2439 | 0.0 | 0.0 |
| 0.2448 | 353.0 | 1059 | 0.2437 | 0.0 | 0.0 |
| 0.2452 | 354.0 | 1062 | 0.2436 | 0.0 | 0.0 |
| 0.2449 | 355.0 | 1065 | 0.2435 | 0.0 | 0.0 |
| 0.2444 | 356.0 | 1068 | 0.2434 | 0.0 | 0.0 |
| 0.2443 | 357.0 | 1071 | 0.2433 | 0.0 | 0.0 |
| 0.2444 | 358.0 | 1074 | 0.2432 | 0.0 | 0.0 |
| 0.2442 | 359.0 | 1077 | 0.2431 | 0.0 | 0.0 |
| 0.2439 | 360.0 | 1080 | 0.2430 | 0.0 | 0.0 |
| 0.2438 | 361.0 | 1083 | 0.2429 | 0.0 | 0.0 |
| 0.2443 | 362.0 | 1086 | 0.2428 | 0.0 | 0.0 |
| 0.244 | 363.0 | 1089 | 0.2427 | 0.0 | 0.0 |
| 0.2435 | 364.0 | 1092 | 0.2426 | 0.0 | 0.0 |
| 0.2441 | 365.0 | 1095 | 0.2425 | 0.0 | 0.0 |
| 0.2435 | 366.0 | 1098 | 0.2425 | 0.0 | 0.0 |
| 0.2432 | 367.0 | 1101 | 0.2424 | 0.0 | 0.0 |
| 0.243 | 368.0 | 1104 | 0.2423 | 0.0 | 0.0 |
| 0.243 | 369.0 | 1107 | 0.2422 | 0.0 | 0.0 |
| 0.2433 | 370.0 | 1110 | 0.2421 | 0.0 | 0.0 |
| 0.2434 | 371.0 | 1113 | 0.2420 | 0.0 | 0.0 |
| 0.2423 | 372.0 | 1116 | 0.2419 | 0.0 | 0.0 |
| 0.2436 | 373.0 | 1119 | 0.2418 | 0.0 | 0.0 |
| 0.2435 | 374.0 | 1122 | 0.2417 | 0.0 | 0.0 |
| 0.2424 | 375.0 | 1125 | 0.2416 | 0.0 | 0.0 |
| 0.2423 | 376.0 | 1128 | 0.2416 | 0.0 | 0.0 |
| 0.2424 | 377.0 | 1131 | 0.2415 | 0.0 | 0.0 |
| 0.2428 | 378.0 | 1134 | 0.2414 | 0.0 | 0.0 |
| 0.2425 | 379.0 | 1137 | 0.2413 | 0.0 | 0.0 |
| 0.2417 | 380.0 | 1140 | 0.2412 | 0.0 | 0.0 |
| 0.2419 | 381.0 | 1143 | 0.2411 | 0.0 | 0.0 |
| 0.2422 | 382.0 | 1146 | 0.2411 | 0.0 | 0.0 |
| 0.2422 | 383.0 | 1149 | 0.2410 | 0.0 | 0.0 |
| 0.2422 | 384.0 | 1152 | 0.2409 | 0.0 | 0.0 |
| 0.2414 | 385.0 | 1155 | 0.2408 | 0.0 | 0.0 |
| 0.2414 | 386.0 | 1158 | 0.2407 | 0.0 | 0.0 |
| 0.2421 | 387.0 | 1161 | 0.2406 | 0.0 | 0.0 |
| 0.2418 | 388.0 | 1164 | 0.2406 | 0.0 | 0.0 |
| 0.2416 | 389.0 | 1167 | 0.2405 | 0.0 | 0.0 |
| 0.2417 | 390.0 | 1170 | 0.2404 | 0.0 | 0.0 |
| 0.2409 | 391.0 | 1173 | 0.2403 | 0.0 | 0.0 |
| 0.2411 | 392.0 | 1176 | 0.2403 | 0.0 | 0.0 |
| 0.242 | 393.0 | 1179 | 0.2402 | 0.0 | 0.0 |
| 0.2406 | 394.0 | 1182 | 0.2401 | 0.0 | 0.0 |
| 0.2409 | 395.0 | 1185 | 0.2400 | 0.0 | 0.0 |
| 0.2408 | 396.0 | 1188 | 0.2400 | 0.0 | 0.0 |
| 0.2412 | 397.0 | 1191 | 0.2399 | 0.0 | 0.0 |
| 0.2407 | 398.0 | 1194 | 0.2398 | 0.0 | 0.0 |
| 0.2409 | 399.0 | 1197 | 0.2397 | 0.0 | 0.0 |
| 0.2412 | 400.0 | 1200 | 0.2397 | 0.0 | 0.0 |
| 0.241 | 401.0 | 1203 | 0.2396 | 0.0 | 0.0 |
| 0.2407 | 402.0 | 1206 | 0.2395 | 0.0 | 0.0 |
| 0.2405 | 403.0 | 1209 | 0.2395 | 0.0 | 0.0 |
| 0.2401 | 404.0 | 1212 | 0.2394 | 0.0 | 0.0 |
| 0.2395 | 405.0 | 1215 | 0.2393 | 0.0 | 0.0 |
| 0.2406 | 406.0 | 1218 | 0.2393 | 0.0 | 0.0 |
| 0.2399 | 407.0 | 1221 | 0.2392 | 0.0 | 0.0 |
| 0.2402 | 408.0 | 1224 | 0.2391 | 0.0 | 0.0 |
| 0.24 | 409.0 | 1227 | 0.2391 | 0.0 | 0.0 |
| 0.2394 | 410.0 | 1230 | 0.2390 | 0.0 | 0.0 |
| 0.24 | 411.0 | 1233 | 0.2389 | 0.0 | 0.0 |
| 0.2397 | 412.0 | 1236 | 0.2389 | 0.0 | 0.0 |
| 0.2398 | 413.0 | 1239 | 0.2388 | 0.0 | 0.0 |
| 0.2394 | 414.0 | 1242 | 0.2387 | 0.0 | 0.0 |
| 0.2394 | 415.0 | 1245 | 0.2387 | 0.0 | 0.0 |
| 0.2394 | 416.0 | 1248 | 0.2386 | 0.0 | 0.0 |
| 0.2386 | 417.0 | 1251 | 0.2385 | 0.0 | 0.0 |
| 0.2395 | 418.0 | 1254 | 0.2385 | 0.0 | 0.0 |
| 0.239 | 419.0 | 1257 | 0.2384 | 0.0 | 0.0 |
| 0.2402 | 420.0 | 1260 | 0.2384 | 0.0 | 0.0 |
| 0.2394 | 421.0 | 1263 | 0.2383 | 0.0 | 0.0 |
| 0.2391 | 422.0 | 1266 | 0.2382 | 0.0 | 0.0 |
| 0.2388 | 423.0 | 1269 | 0.2382 | 0.0 | 0.0 |
| 0.2389 | 424.0 | 1272 | 0.2381 | 0.0 | 0.0 |
| 0.2385 | 425.0 | 1275 | 0.2381 | 0.0 | 0.0 |
| 0.2393 | 426.0 | 1278 | 0.2380 | 0.0 | 0.0 |
| 0.2387 | 427.0 | 1281 | 0.2379 | 0.0 | 0.0 |
| 0.2384 | 428.0 | 1284 | 0.2379 | 0.0 | 0.0 |
| 0.2386 | 429.0 | 1287 | 0.2378 | 0.0 | 0.0 |
| 0.2389 | 430.0 | 1290 | 0.2378 | 0.0 | 0.0 |
| 0.2385 | 431.0 | 1293 | 0.2377 | 0.0 | 0.0 |
| 0.2388 | 432.0 | 1296 | 0.2377 | 0.0 | 0.0 |
| 0.2378 | 433.0 | 1299 | 0.2376 | 0.0 | 0.0 |
| 0.2385 | 434.0 | 1302 | 0.2376 | 0.0 | 0.0 |
| 0.2382 | 435.0 | 1305 | 0.2375 | 0.0 | 0.0 |
| 0.238 | 436.0 | 1308 | 0.2374 | 0.0 | 0.0 |
| 0.2383 | 437.0 | 1311 | 0.2374 | 0.0 | 0.0 |
| 0.2379 | 438.0 | 1314 | 0.2373 | 0.0 | 0.0 |
| 0.2381 | 439.0 | 1317 | 0.2373 | 0.0 | 0.0 |
| 0.2373 | 440.0 | 1320 | 0.2372 | 0.0 | 0.0 |
| 0.2381 | 441.0 | 1323 | 0.2372 | 0.0 | 0.0 |
| 0.238 | 442.0 | 1326 | 0.2371 | 0.0 | 0.0 |
| 0.2383 | 443.0 | 1329 | 0.2371 | 0.0 | 0.0 |
| 0.2375 | 444.0 | 1332 | 0.2370 | 0.0 | 0.0 |
| 0.2378 | 445.0 | 1335 | 0.2370 | 0.0 | 0.0 |
| 0.2379 | 446.0 | 1338 | 0.2369 | 0.0 | 0.0 |
| 0.2379 | 447.0 | 1341 | 0.2369 | 0.0 | 0.0 |
| 0.2379 | 448.0 | 1344 | 0.2368 | 0.0 | 0.0 |
| 0.2372 | 449.0 | 1347 | 0.2368 | 0.0 | 0.0 |
| 0.2385 | 450.0 | 1350 | 0.2367 | 0.0 | 0.0 |
| 0.2382 | 451.0 | 1353 | 0.2367 | 0.0 | 0.0 |
| 0.2375 | 452.0 | 1356 | 0.2366 | 0.0 | 0.0 |
| 0.2366 | 453.0 | 1359 | 0.2366 | 0.0 | 0.0 |
| 0.2377 | 454.0 | 1362 | 0.2365 | 0.0 | 0.0 |
| 0.2375 | 455.0 | 1365 | 0.2365 | 0.0 | 0.0 |
| 0.2374 | 456.0 | 1368 | 0.2365 | 0.0 | 0.0 |
| 0.2374 | 457.0 | 1371 | 0.2364 | 0.0 | 0.0 |
| 0.2376 | 458.0 | 1374 | 0.2364 | 0.0 | 0.0 |
| 0.2368 | 459.0 | 1377 | 0.2363 | 0.0 | 0.0 |
| 0.237 | 460.0 | 1380 | 0.2363 | 0.0 | 0.0 |
| 0.237 | 461.0 | 1383 | 0.2362 | 0.0 | 0.0 |
| 0.2373 | 462.0 | 1386 | 0.2362 | 0.0 | 0.0 |
| 0.2374 | 463.0 | 1389 | 0.2361 | 0.0 | 0.0 |
| 0.2369 | 464.0 | 1392 | 0.2361 | 0.0 | 0.0 |
| 0.2371 | 465.0 | 1395 | 0.2361 | 0.0 | 0.0 |
| 0.2364 | 466.0 | 1398 | 0.2360 | 0.0 | 0.0 |
| 0.2361 | 467.0 | 1401 | 0.2360 | 0.0 | 0.0 |
| 0.2369 | 468.0 | 1404 | 0.2359 | 0.0 | 0.0 |
| 0.2365 | 469.0 | 1407 | 0.2359 | 0.0 | 0.0 |
| 0.2365 | 470.0 | 1410 | 0.2358 | 0.0 | 0.0 |
| 0.2369 | 471.0 | 1413 | 0.2358 | 0.0 | 0.0 |
| 0.2356 | 472.0 | 1416 | 0.2358 | 0.0 | 0.0 |
| 0.2373 | 473.0 | 1419 | 0.2357 | 0.0 | 0.0 |
| 0.2361 | 474.0 | 1422 | 0.2357 | 0.0 | 0.0 |
| 0.2367 | 475.0 | 1425 | 0.2356 | 0.0 | 0.0 |
| 0.2372 | 476.0 | 1428 | 0.2356 | 0.0 | 0.0 |
| 0.2358 | 477.0 | 1431 | 0.2356 | 0.0 | 0.0 |
| 0.2357 | 478.0 | 1434 | 0.2355 | 0.0 | 0.0 |
| 0.2362 | 479.0 | 1437 | 0.2355 | 0.0 | 0.0 |
| 0.236 | 480.0 | 1440 | 0.2354 | 0.0 | 0.0 |
| 0.2358 | 481.0 | 1443 | 0.2354 | 0.0 | 0.0 |
| 0.2363 | 482.0 | 1446 | 0.2354 | 0.0 | 0.0 |
| 0.2361 | 483.0 | 1449 | 0.2353 | 0.0 | 0.0 |
| 0.236 | 484.0 | 1452 | 0.2353 | 0.0 | 0.0 |
| 0.2362 | 485.0 | 1455 | 0.2352 | 0.0 | 0.0 |
| 0.2357 | 486.0 | 1458 | 0.2352 | 0.0 | 0.0 |
| 0.2357 | 487.0 | 1461 | 0.2352 | 0.0 | 0.0 |
| 0.2351 | 488.0 | 1464 | 0.2351 | 0.0 | 0.0 |
| 0.2353 | 489.0 | 1467 | 0.2351 | 0.0 | 0.0 |
| 0.2353 | 490.0 | 1470 | 0.2351 | 0.0 | 0.0 |
| 0.2359 | 491.0 | 1473 | 0.2350 | 0.0 | 0.0 |
| 0.2363 | 492.0 | 1476 | 0.2350 | 0.0 | 0.0 |
| 0.2357 | 493.0 | 1479 | 0.2350 | 0.0 | 0.0 |
| 0.2356 | 494.0 | 1482 | 0.2349 | 0.0 | 0.0 |
| 0.2365 | 495.0 | 1485 | 0.2349 | 0.0 | 0.0 |
| 0.2357 | 496.0 | 1488 | 0.2348 | 0.0 | 0.0 |
| 0.2353 | 497.0 | 1491 | 0.2348 | 0.0 | 0.0 |
| 0.2353 | 498.0 | 1494 | 0.2348 | 0.0 | 0.0 |
| 0.2357 | 499.0 | 1497 | 0.2347 | 0.0 | 0.0 |
| 0.2361 | 500.0 | 1500 | 0.2347 | 0.0 | 0.0 |
| 0.2354 | 501.0 | 1503 | 0.2347 | 0.0 | 0.0 |
| 0.2348 | 502.0 | 1506 | 0.2346 | 0.0 | 0.0 |
| 0.2356 | 503.0 | 1509 | 0.2346 | 0.0 | 0.0 |
| 0.2355 | 504.0 | 1512 | 0.2346 | 0.0 | 0.0 |
| 0.2352 | 505.0 | 1515 | 0.2345 | 0.0 | 0.0 |
| 0.2362 | 506.0 | 1518 | 0.2345 | 0.0 | 0.0 |
| 0.2349 | 507.0 | 1521 | 0.2345 | 0.0 | 0.0 |
| 0.2352 | 508.0 | 1524 | 0.2345 | 0.0 | 0.0 |
| 0.2355 | 509.0 | 1527 | 0.2344 | 0.0 | 0.0 |
| 0.2357 | 510.0 | 1530 | 0.2344 | 0.0 | 0.0 |
| 0.2344 | 511.0 | 1533 | 0.2344 | 0.0 | 0.0 |
| 0.2356 | 512.0 | 1536 | 0.2343 | 0.0 | 0.0 |
| 0.2353 | 513.0 | 1539 | 0.2343 | 0.0 | 0.0 |
| 0.2351 | 514.0 | 1542 | 0.2343 | 0.0 | 0.0 |
| 0.2354 | 515.0 | 1545 | 0.2342 | 0.0 | 0.0 |
| 0.2354 | 516.0 | 1548 | 0.2342 | 0.0 | 0.0 |
| 0.2349 | 517.0 | 1551 | 0.2342 | 0.0 | 0.0 |
| 0.2355 | 518.0 | 1554 | 0.2341 | 0.0 | 0.0 |
| 0.2353 | 519.0 | 1557 | 0.2341 | 0.0 | 0.0 |
| 0.2347 | 520.0 | 1560 | 0.2341 | 0.0 | 0.0 |
| 0.2358 | 521.0 | 1563 | 0.2341 | 0.0 | 0.0 |
| 0.2341 | 522.0 | 1566 | 0.2340 | 0.0 | 0.0 |
| 0.2341 | 523.0 | 1569 | 0.2340 | 0.0 | 0.0 |
| 0.2344 | 524.0 | 1572 | 0.2340 | 0.0 | 0.0 |
| 0.2348 | 525.0 | 1575 | 0.2339 | 0.0 | 0.0 |
| 0.2349 | 526.0 | 1578 | 0.2339 | 0.0 | 0.0 |
| 0.2339 | 527.0 | 1581 | 0.2339 | 0.0 | 0.0 |
| 0.2347 | 528.0 | 1584 | 0.2339 | 0.0 | 0.0 |
| 0.2341 | 529.0 | 1587 | 0.2338 | 0.0 | 0.0 |
| 0.2344 | 530.0 | 1590 | 0.2338 | 0.0 | 0.0 |
| 0.2344 | 531.0 | 1593 | 0.2338 | 0.0 | 0.0 |
| 0.2347 | 532.0 | 1596 | 0.2337 | 0.0 | 0.0 |
| 0.2345 | 533.0 | 1599 | 0.2337 | 0.0 | 0.0 |
| 0.2345 | 534.0 | 1602 | 0.2337 | 0.0 | 0.0 |
| 0.2339 | 535.0 | 1605 | 0.2337 | 0.0 | 0.0 |
| 0.2342 | 536.0 | 1608 | 0.2336 | 0.0 | 0.0 |
| 0.234 | 537.0 | 1611 | 0.2336 | 0.0 | 0.0 |
| 0.2346 | 538.0 | 1614 | 0.2336 | 0.0 | 0.0 |
| 0.2343 | 539.0 | 1617 | 0.2336 | 0.0 | 0.0 |
| 0.2346 | 540.0 | 1620 | 0.2335 | 0.0 | 0.0 |
| 0.2333 | 541.0 | 1623 | 0.2335 | 0.0 | 0.0 |
| 0.2339 | 542.0 | 1626 | 0.2335 | 0.0 | 0.0 |
| 0.2335 | 543.0 | 1629 | 0.2335 | 0.0 | 0.0 |
| 0.2342 | 544.0 | 1632 | 0.2334 | 0.0 | 0.0 |
| 0.2335 | 545.0 | 1635 | 0.2334 | 0.0 | 0.0 |
| 0.2341 | 546.0 | 1638 | 0.2334 | 0.0 | 0.0 |
| 0.234 | 547.0 | 1641 | 0.2334 | 0.0 | 0.0 |
| 0.2342 | 548.0 | 1644 | 0.2333 | 0.0 | 0.0 |
| 0.2334 | 549.0 | 1647 | 0.2333 | 0.0 | 0.0 |
| 0.2341 | 550.0 | 1650 | 0.2333 | 0.0 | 0.0 |
| 0.2338 | 551.0 | 1653 | 0.2333 | 0.0 | 0.0 |
| 0.2336 | 552.0 | 1656 | 0.2332 | 0.0 | 0.0 |
| 0.2335 | 553.0 | 1659 | 0.2332 | 0.0 | 0.0 |
| 0.2334 | 554.0 | 1662 | 0.2332 | 0.0 | 0.0 |
| 0.2339 | 555.0 | 1665 | 0.2332 | 0.0 | 0.0 |
| 0.2333 | 556.0 | 1668 | 0.2331 | 0.0 | 0.0 |
| 0.2337 | 557.0 | 1671 | 0.2331 | 0.0 | 0.0 |
| 0.2333 | 558.0 | 1674 | 0.2331 | 0.0 | 0.0 |
| 0.2339 | 559.0 | 1677 | 0.2331 | 0.0 | 0.0 |
| 0.2332 | 560.0 | 1680 | 0.2331 | 0.0 | 0.0 |
| 0.2343 | 561.0 | 1683 | 0.2330 | 0.0 | 0.0 |
| 0.234 | 562.0 | 1686 | 0.2330 | 0.0 | 0.0 |
| 0.2335 | 563.0 | 1689 | 0.2330 | 0.0 | 0.0 |
| 0.2333 | 564.0 | 1692 | 0.2330 | 0.0 | 0.0 |
| 0.2334 | 565.0 | 1695 | 0.2329 | 0.0 | 0.0 |
| 0.2337 | 566.0 | 1698 | 0.2329 | 0.0 | 0.0 |
| 0.2344 | 567.0 | 1701 | 0.2329 | 0.0 | 0.0 |
| 0.2331 | 568.0 | 1704 | 0.2329 | 0.0 | 0.0 |
| 0.2338 | 569.0 | 1707 | 0.2329 | 0.0 | 0.0 |
| 0.2331 | 570.0 | 1710 | 0.2328 | 0.0 | 0.0 |
| 0.234 | 571.0 | 1713 | 0.2328 | 0.0 | 0.0 |
| 0.2334 | 572.0 | 1716 | 0.2328 | 0.0 | 0.0 |
| 0.2336 | 573.0 | 1719 | 0.2328 | 0.0 | 0.0 |
| 0.2334 | 574.0 | 1722 | 0.2328 | 0.0 | 0.0 |
| 0.2332 | 575.0 | 1725 | 0.2327 | 0.0 | 0.0 |
| 0.2339 | 576.0 | 1728 | 0.2327 | 0.0 | 0.0 |
| 0.2338 | 577.0 | 1731 | 0.2327 | 0.0 | 0.0 |
| 0.2333 | 578.0 | 1734 | 0.2327 | 0.0 | 0.0 |
| 0.2334 | 579.0 | 1737 | 0.2327 | 0.0 | 0.0 |
| 0.2335 | 580.0 | 1740 | 0.2326 | 0.0 | 0.0 |
| 0.2345 | 581.0 | 1743 | 0.2326 | 0.0 | 0.0 |
| 0.233 | 582.0 | 1746 | 0.2326 | 0.0 | 0.0 |
| 0.233 | 583.0 | 1749 | 0.2326 | 0.0 | 0.0 |
| 0.2342 | 584.0 | 1752 | 0.2326 | 0.0 | 0.0 |
| 0.2322 | 585.0 | 1755 | 0.2325 | 0.0 | 0.0 |
| 0.2335 | 586.0 | 1758 | 0.2325 | 0.0 | 0.0 |
| 0.2329 | 587.0 | 1761 | 0.2325 | 0.0 | 0.0 |
| 0.2332 | 588.0 | 1764 | 0.2325 | 0.0 | 0.0 |
| 0.2327 | 589.0 | 1767 | 0.2325 | 0.0 | 0.0 |
| 0.2325 | 590.0 | 1770 | 0.2324 | 0.0 | 0.0 |
| 0.2332 | 591.0 | 1773 | 0.2324 | 0.0 | 0.0 |
| 0.2328 | 592.0 | 1776 | 0.2324 | 0.0 | 0.0 |
| 0.2328 | 593.0 | 1779 | 0.2324 | 0.0 | 0.0 |
| 0.2327 | 594.0 | 1782 | 0.2324 | 0.0 | 0.0 |
| 0.2325 | 595.0 | 1785 | 0.2324 | 0.0 | 0.0 |
| 0.2325 | 596.0 | 1788 | 0.2323 | 0.0 | 0.0 |
| 0.2321 | 597.0 | 1791 | 0.2323 | 0.0 | 0.0 |
| 0.2327 | 598.0 | 1794 | 0.2323 | 0.0 | 0.0 |
| 0.2333 | 599.0 | 1797 | 0.2323 | 0.0 | 0.0 |
| 0.2338 | 600.0 | 1800 | 0.2323 | 0.0 | 0.0 |
| 0.2326 | 601.0 | 1803 | 0.2323 | 0.0 | 0.0 |
| 0.2333 | 602.0 | 1806 | 0.2322 | 0.0 | 0.0 |
| 0.2329 | 603.0 | 1809 | 0.2322 | 0.0 | 0.0 |
| 0.2327 | 604.0 | 1812 | 0.2322 | 0.0 | 0.0 |
| 0.2326 | 605.0 | 1815 | 0.2322 | 0.0 | 0.0 |
| 0.2325 | 606.0 | 1818 | 0.2322 | 0.0 | 0.0 |
| 0.2322 | 607.0 | 1821 | 0.2322 | 0.0 | 0.0 |
| 0.2321 | 608.0 | 1824 | 0.2321 | 0.0 | 0.0 |
| 0.2332 | 609.0 | 1827 | 0.2321 | 0.0 | 0.0 |
| 0.2325 | 610.0 | 1830 | 0.2321 | 0.0 | 0.0 |
| 0.2332 | 611.0 | 1833 | 0.2321 | 0.0 | 0.0 |
| 0.2329 | 612.0 | 1836 | 0.2321 | 0.0 | 0.0 |
| 0.2327 | 613.0 | 1839 | 0.2321 | 0.0 | 0.0 |
| 0.2324 | 614.0 | 1842 | 0.2320 | 0.0 | 0.0 |
| 0.2322 | 615.0 | 1845 | 0.2320 | 0.0 | 0.0 |
| 0.2327 | 616.0 | 1848 | 0.2320 | 0.0 | 0.0 |
| 0.2326 | 617.0 | 1851 | 0.2320 | 0.0 | 0.0 |
| 0.2331 | 618.0 | 1854 | 0.2320 | 0.0 | 0.0 |
| 0.2329 | 619.0 | 1857 | 0.2320 | 0.0 | 0.0 |
| 0.232 | 620.0 | 1860 | 0.2319 | 0.0 | 0.0 |
| 0.2321 | 621.0 | 1863 | 0.2319 | 0.0 | 0.0 |
| 0.2324 | 622.0 | 1866 | 0.2319 | 0.0 | 0.0 |
| 0.2325 | 623.0 | 1869 | 0.2319 | 0.0 | 0.0 |
| 0.2324 | 624.0 | 1872 | 0.2319 | 0.0 | 0.0 |
| 0.233 | 625.0 | 1875 | 0.2319 | 0.0 | 0.0 |
| 0.2316 | 626.0 | 1878 | 0.2319 | 0.0 | 0.0 |
| 0.2324 | 627.0 | 1881 | 0.2318 | 0.0 | 0.0 |
| 0.2326 | 628.0 | 1884 | 0.2318 | 0.0 | 0.0 |
| 0.2323 | 629.0 | 1887 | 0.2318 | 0.0 | 0.0 |
| 0.2322 | 630.0 | 1890 | 0.2318 | 0.0 | 0.0 |
| 0.2331 | 631.0 | 1893 | 0.2318 | 0.0 | 0.0 |
| 0.2321 | 632.0 | 1896 | 0.2318 | 0.0 | 0.0 |
| 0.2325 | 633.0 | 1899 | 0.2318 | 0.0 | 0.0 |
| 0.2322 | 634.0 | 1902 | 0.2317 | 0.0 | 0.0 |
| 0.2331 | 635.0 | 1905 | 0.2317 | 0.0 | 0.0 |
| 0.2322 | 636.0 | 1908 | 0.2317 | 0.0 | 0.0 |
| 0.2334 | 637.0 | 1911 | 0.2317 | 0.0 | 0.0 |
| 0.2319 | 638.0 | 1914 | 0.2317 | 0.0 | 0.0 |
| 0.2319 | 639.0 | 1917 | 0.2317 | 0.0 | 0.0 |
| 0.2329 | 640.0 | 1920 | 0.2317 | 0.0 | 0.0 |
| 0.2317 | 641.0 | 1923 | 0.2316 | 0.0 | 0.0 |
| 0.2324 | 642.0 | 1926 | 0.2316 | 0.0 | 0.0 |
| 0.2325 | 643.0 | 1929 | 0.2316 | 0.0 | 0.0 |
| 0.2318 | 644.0 | 1932 | 0.2316 | 0.0 | 0.0 |
| 0.2326 | 645.0 | 1935 | 0.2316 | 0.0 | 0.0 |
| 0.2325 | 646.0 | 1938 | 0.2316 | 0.0 | 0.0 |
| 0.232 | 647.0 | 1941 | 0.2316 | 0.0 | 0.0 |
| 0.2321 | 648.0 | 1944 | 0.2316 | 0.0 | 0.0 |
| 0.2322 | 649.0 | 1947 | 0.2315 | 0.0 | 0.0 |
| 0.2322 | 650.0 | 1950 | 0.2315 | 0.0 | 0.0 |
| 0.2321 | 651.0 | 1953 | 0.2315 | 0.0 | 0.0 |
| 0.2317 | 652.0 | 1956 | 0.2315 | 0.0 | 0.0 |
| 0.2324 | 653.0 | 1959 | 0.2315 | 0.0 | 0.0 |
| 0.2324 | 654.0 | 1962 | 0.2315 | 0.0 | 0.0 |
| 0.2312 | 655.0 | 1965 | 0.2315 | 0.0 | 0.0 |
| 0.2323 | 656.0 | 1968 | 0.2315 | 0.0 | 0.0 |
| 0.2321 | 657.0 | 1971 | 0.2314 | 0.0 | 0.0 |
| 0.232 | 658.0 | 1974 | 0.2314 | 0.0 | 0.0 |
| 0.2314 | 659.0 | 1977 | 0.2314 | 0.0 | 0.0 |
| 0.2329 | 660.0 | 1980 | 0.2314 | 0.0 | 0.0 |
| 0.232 | 661.0 | 1983 | 0.2314 | 0.0 | 0.0 |
| 0.2319 | 662.0 | 1986 | 0.2314 | 0.0 | 0.0 |
| 0.2319 | 663.0 | 1989 | 0.2314 | 0.0 | 0.0 |
| 0.2317 | 664.0 | 1992 | 0.2314 | 0.0 | 0.0 |
| 0.2314 | 665.0 | 1995 | 0.2314 | 0.0 | 0.0 |
| 0.2312 | 666.0 | 1998 | 0.2313 | 0.0 | 0.0 |
| 0.2326 | 667.0 | 2001 | 0.2313 | 0.0 | 0.0 |
| 0.2321 | 668.0 | 2004 | 0.2313 | 0.0 | 0.0 |
| 0.2319 | 669.0 | 2007 | 0.2313 | 0.0 | 0.0 |
| 0.2326 | 670.0 | 2010 | 0.2313 | 0.0 | 0.0 |
| 0.2313 | 671.0 | 2013 | 0.2313 | 0.0 | 0.0 |
| 0.2321 | 672.0 | 2016 | 0.2313 | 0.0 | 0.0 |
| 0.2318 | 673.0 | 2019 | 0.2313 | 0.0 | 0.0 |
| 0.2314 | 674.0 | 2022 | 0.2312 | 0.0 | 0.0 |
| 0.2317 | 675.0 | 2025 | 0.2312 | 0.0 | 0.0 |
| 0.2328 | 676.0 | 2028 | 0.2312 | 0.0 | 0.0 |
| 0.2317 | 677.0 | 2031 | 0.2312 | 0.0 | 0.0 |
| 0.2321 | 678.0 | 2034 | 0.2312 | 0.0 | 0.0 |
| 0.232 | 679.0 | 2037 | 0.2312 | 0.0 | 0.0 |
| 0.232 | 680.0 | 2040 | 0.2312 | 0.0 | 0.0 |
| 0.2319 | 681.0 | 2043 | 0.2312 | 0.0 | 0.0 |
| 0.2311 | 682.0 | 2046 | 0.2312 | 0.0 | 0.0 |
| 0.2323 | 683.0 | 2049 | 0.2312 | 0.0 | 0.0 |
| 0.2315 | 684.0 | 2052 | 0.2311 | 0.0 | 0.0 |
| 0.2321 | 685.0 | 2055 | 0.2311 | 0.0 | 0.0 |
| 0.2307 | 686.0 | 2058 | 0.2311 | 0.0 | 0.0 |
| 0.2311 | 687.0 | 2061 | 0.2311 | 0.0 | 0.0 |
| 0.2307 | 688.0 | 2064 | 0.2311 | 0.0 | 0.0 |
| 0.2317 | 689.0 | 2067 | 0.2311 | 0.0 | 0.0 |
| 0.2318 | 690.0 | 2070 | 0.2311 | 0.0 | 0.0 |
| 0.2316 | 691.0 | 2073 | 0.2311 | 0.0 | 0.0 |
| 0.233 | 692.0 | 2076 | 0.2311 | 0.0 | 0.0 |
| 0.2324 | 693.0 | 2079 | 0.2311 | 0.0 | 0.0 |
| 0.2306 | 694.0 | 2082 | 0.2310 | 0.0 | 0.0 |
| 0.2313 | 695.0 | 2085 | 0.2310 | 0.0 | 0.0 |
| 0.2311 | 696.0 | 2088 | 0.2310 | 0.0 | 0.0 |
| 0.2313 | 697.0 | 2091 | 0.2310 | 0.0 | 0.0 |
| 0.2313 | 698.0 | 2094 | 0.2310 | 0.0 | 0.0 |
| 0.2317 | 699.0 | 2097 | 0.2310 | 0.0 | 0.0 |
| 0.2306 | 700.0 | 2100 | 0.2310 | 0.0 | 0.0 |
| 0.232 | 701.0 | 2103 | 0.2310 | 0.0 | 0.0 |
| 0.2312 | 702.0 | 2106 | 0.2310 | 0.0 | 0.0 |
| 0.2319 | 703.0 | 2109 | 0.2310 | 0.0 | 0.0 |
| 0.2314 | 704.0 | 2112 | 0.2310 | 0.0 | 0.0 |
| 0.2311 | 705.0 | 2115 | 0.2309 | 0.0 | 0.0 |
| 0.2313 | 706.0 | 2118 | 0.2309 | 0.0 | 0.0 |
| 0.2309 | 707.0 | 2121 | 0.2309 | 0.0 | 0.0 |
| 0.2318 | 708.0 | 2124 | 0.2309 | 0.0 | 0.0 |
| 0.2307 | 709.0 | 2127 | 0.2309 | 0.0 | 0.0 |
| 0.2312 | 710.0 | 2130 | 0.2309 | 0.0 | 0.0 |
| 0.2307 | 711.0 | 2133 | 0.2309 | 0.0 | 0.0 |
| 0.2318 | 712.0 | 2136 | 0.2309 | 0.0 | 0.0 |
| 0.2314 | 713.0 | 2139 | 0.2309 | 0.0 | 0.0 |
| 0.2322 | 714.0 | 2142 | 0.2309 | 0.0 | 0.0 |
| 0.2319 | 715.0 | 2145 | 0.2309 | 0.0 | 0.0 |
| 0.231 | 716.0 | 2148 | 0.2308 | 0.0 | 0.0 |
| 0.2316 | 717.0 | 2151 | 0.2308 | 0.0 | 0.0 |
| 0.2312 | 718.0 | 2154 | 0.2308 | 0.0 | 0.0 |
| 0.2308 | 719.0 | 2157 | 0.2308 | 0.0 | 0.0 |
| 0.2318 | 720.0 | 2160 | 0.2308 | 0.0 | 0.0 |
| 0.2314 | 721.0 | 2163 | 0.2308 | 0.0 | 0.0 |
| 0.2312 | 722.0 | 2166 | 0.2308 | 0.0 | 0.0 |
| 0.2305 | 723.0 | 2169 | 0.2308 | 0.0 | 0.0 |
| 0.2312 | 724.0 | 2172 | 0.2308 | 0.0 | 0.0 |
| 0.2311 | 725.0 | 2175 | 0.2308 | 0.0 | 0.0 |
| 0.2316 | 726.0 | 2178 | 0.2308 | 0.0 | 0.0 |
| 0.2309 | 727.0 | 2181 | 0.2308 | 0.0 | 0.0 |
| 0.2311 | 728.0 | 2184 | 0.2307 | 0.0 | 0.0 |
| 0.2313 | 729.0 | 2187 | 0.2307 | 0.0 | 0.0 |
| 0.2308 | 730.0 | 2190 | 0.2307 | 0.0 | 0.0 |
| 0.2314 | 731.0 | 2193 | 0.2307 | 0.0 | 0.0 |
| 0.2309 | 732.0 | 2196 | 0.2307 | 0.0 | 0.0 |
| 0.2312 | 733.0 | 2199 | 0.2307 | 0.0 | 0.0 |
| 0.2318 | 734.0 | 2202 | 0.2307 | 0.0 | 0.0 |
| 0.2312 | 735.0 | 2205 | 0.2307 | 0.0 | 0.0 |
| 0.2316 | 736.0 | 2208 | 0.2307 | 0.0 | 0.0 |
| 0.2322 | 737.0 | 2211 | 0.2307 | 0.0 | 0.0 |
| 0.2305 | 738.0 | 2214 | 0.2307 | 0.0 | 0.0 |
| 0.2319 | 739.0 | 2217 | 0.2307 | 0.0 | 0.0 |
| 0.2313 | 740.0 | 2220 | 0.2307 | 0.0 | 0.0 |
| 0.2311 | 741.0 | 2223 | 0.2307 | 0.0 | 0.0 |
| 0.231 | 742.0 | 2226 | 0.2306 | 0.0 | 0.0 |
| 0.2312 | 743.0 | 2229 | 0.2306 | 0.0 | 0.0 |
| 0.2317 | 744.0 | 2232 | 0.2306 | 0.0 | 0.0 |
| 0.2312 | 745.0 | 2235 | 0.2306 | 0.0 | 0.0 |
| 0.2313 | 746.0 | 2238 | 0.2306 | 0.0 | 0.0 |
| 0.2318 | 747.0 | 2241 | 0.2306 | 0.0 | 0.0 |
| 0.2313 | 748.0 | 2244 | 0.2306 | 0.0 | 0.0 |
| 0.2298 | 749.0 | 2247 | 0.2306 | 0.0 | 0.0 |
| 0.2323 | 750.0 | 2250 | 0.2306 | 0.0 | 0.0 |
| 0.2326 | 751.0 | 2253 | 0.2306 | 0.0 | 0.0 |
| 0.2315 | 752.0 | 2256 | 0.2306 | 0.0 | 0.0 |
| 0.2297 | 753.0 | 2259 | 0.2306 | 0.0 | 0.0 |
| 0.2305 | 754.0 | 2262 | 0.2306 | 0.0 | 0.0 |
| 0.2312 | 755.0 | 2265 | 0.2306 | 0.0 | 0.0 |
| 0.231 | 756.0 | 2268 | 0.2305 | 0.0 | 0.0 |
| 0.2308 | 757.0 | 2271 | 0.2305 | 0.0 | 0.0 |
| 0.2315 | 758.0 | 2274 | 0.2305 | 0.0 | 0.0 |
| 0.2307 | 759.0 | 2277 | 0.2305 | 0.0 | 0.0 |
| 0.2314 | 760.0 | 2280 | 0.2305 | 0.0 | 0.0 |
| 0.232 | 761.0 | 2283 | 0.2305 | 0.0 | 0.0 |
| 0.2319 | 762.0 | 2286 | 0.2305 | 0.0 | 0.0 |
| 0.2319 | 763.0 | 2289 | 0.2305 | 0.0 | 0.0 |
| 0.2305 | 764.0 | 2292 | 0.2305 | 0.0 | 0.0 |
| 0.2317 | 765.0 | 2295 | 0.2305 | 0.0 | 0.0 |
| 0.2316 | 766.0 | 2298 | 0.2305 | 0.0 | 0.0 |
| 0.2312 | 767.0 | 2301 | 0.2305 | 0.0 | 0.0 |
| 0.2307 | 768.0 | 2304 | 0.2305 | 0.0 | 0.0 |
| 0.2317 | 769.0 | 2307 | 0.2305 | 0.0 | 0.0 |
| 0.2314 | 770.0 | 2310 | 0.2305 | 0.0 | 0.0 |
| 0.2316 | 771.0 | 2313 | 0.2305 | 0.0 | 0.0 |
| 0.2313 | 772.0 | 2316 | 0.2304 | 0.0 | 0.0 |
| 0.2305 | 773.0 | 2319 | 0.2304 | 0.0 | 0.0 |
| 0.2306 | 774.0 | 2322 | 0.2304 | 0.0 | 0.0 |
| 0.2317 | 775.0 | 2325 | 0.2304 | 0.0 | 0.0 |
| 0.2311 | 776.0 | 2328 | 0.2304 | 0.0 | 0.0 |
| 0.2323 | 777.0 | 2331 | 0.2304 | 0.0 | 0.0 |
| 0.2306 | 778.0 | 2334 | 0.2304 | 0.0 | 0.0 |
| 0.2308 | 779.0 | 2337 | 0.2304 | 0.0 | 0.0 |
| 0.231 | 780.0 | 2340 | 0.2304 | 0.0 | 0.0 |
| 0.2307 | 781.0 | 2343 | 0.2304 | 0.0 | 0.0 |
| 0.2316 | 782.0 | 2346 | 0.2304 | 0.0 | 0.0 |
| 0.2301 | 783.0 | 2349 | 0.2304 | 0.0 | 0.0 |
| 0.2313 | 784.0 | 2352 | 0.2304 | 0.0 | 0.0 |
| 0.2316 | 785.0 | 2355 | 0.2304 | 0.0 | 0.0 |
| 0.2312 | 786.0 | 2358 | 0.2304 | 0.0 | 0.0 |
| 0.2309 | 787.0 | 2361 | 0.2304 | 0.0 | 0.0 |
| 0.2308 | 788.0 | 2364 | 0.2304 | 0.0 | 0.0 |
| 0.2302 | 789.0 | 2367 | 0.2304 | 0.0 | 0.0 |
| 0.2309 | 790.0 | 2370 | 0.2303 | 0.0 | 0.0 |
| 0.2306 | 791.0 | 2373 | 0.2303 | 0.0 | 0.0 |
| 0.2319 | 792.0 | 2376 | 0.2303 | 0.0 | 0.0 |
| 0.2308 | 793.0 | 2379 | 0.2303 | 0.0 | 0.0 |
| 0.23 | 794.0 | 2382 | 0.2303 | 0.0 | 0.0 |
| 0.2305 | 795.0 | 2385 | 0.2303 | 0.0 | 0.0 |
| 0.2313 | 796.0 | 2388 | 0.2303 | 0.0 | 0.0 |
| 0.231 | 797.0 | 2391 | 0.2303 | 0.0 | 0.0 |
| 0.2302 | 798.0 | 2394 | 0.2303 | 0.0 | 0.0 |
| 0.2311 | 799.0 | 2397 | 0.2303 | 0.0 | 0.0 |
| 0.2311 | 800.0 | 2400 | 0.2303 | 0.0 | 0.0 |
| 0.2304 | 801.0 | 2403 | 0.2303 | 0.0 | 0.0 |
| 0.2312 | 802.0 | 2406 | 0.2303 | 0.0 | 0.0 |
| 0.2306 | 803.0 | 2409 | 0.2303 | 0.0 | 0.0 |
| 0.2298 | 804.0 | 2412 | 0.2303 | 0.0 | 0.0 |
| 0.2301 | 805.0 | 2415 | 0.2303 | 0.0 | 0.0 |
| 0.2312 | 806.0 | 2418 | 0.2303 | 0.0 | 0.0 |
| 0.2313 | 807.0 | 2421 | 0.2303 | 0.0 | 0.0 |
| 0.2314 | 808.0 | 2424 | 0.2303 | 0.0 | 0.0 |
| 0.2304 | 809.0 | 2427 | 0.2303 | 0.0 | 0.0 |
| 0.2303 | 810.0 | 2430 | 0.2303 | 0.0 | 0.0 |
| 0.2302 | 811.0 | 2433 | 0.2302 | 0.0 | 0.0 |
| 0.2307 | 812.0 | 2436 | 0.2302 | 0.0 | 0.0 |
| 0.2307 | 813.0 | 2439 | 0.2302 | 0.0 | 0.0 |
| 0.2312 | 814.0 | 2442 | 0.2302 | 0.0 | 0.0 |
| 0.2309 | 815.0 | 2445 | 0.2302 | 0.0 | 0.0 |
| 0.2311 | 816.0 | 2448 | 0.2302 | 0.0 | 0.0 |
| 0.2305 | 817.0 | 2451 | 0.2302 | 0.0 | 0.0 |
| 0.2307 | 818.0 | 2454 | 0.2302 | 0.0 | 0.0 |
| 0.2317 | 819.0 | 2457 | 0.2302 | 0.0 | 0.0 |
| 0.2304 | 820.0 | 2460 | 0.2302 | 0.0 | 0.0 |
| 0.2312 | 821.0 | 2463 | 0.2302 | 0.0 | 0.0 |
| 0.2309 | 822.0 | 2466 | 0.2302 | 0.0 | 0.0 |
| 0.2311 | 823.0 | 2469 | 0.2302 | 0.0 | 0.0 |
| 0.2306 | 824.0 | 2472 | 0.2302 | 0.0 | 0.0 |
| 0.231 | 825.0 | 2475 | 0.2302 | 0.0 | 0.0 |
| 0.2311 | 826.0 | 2478 | 0.2302 | 0.0 | 0.0 |
| 0.2311 | 827.0 | 2481 | 0.2302 | 0.0 | 0.0 |
| 0.2313 | 828.0 | 2484 | 0.2302 | 0.0 | 0.0 |
| 0.2312 | 829.0 | 2487 | 0.2302 | 0.0 | 0.0 |
| 0.2308 | 830.0 | 2490 | 0.2302 | 0.0 | 0.0 |
| 0.2306 | 831.0 | 2493 | 0.2302 | 0.0 | 0.0 |
| 0.2305 | 832.0 | 2496 | 0.2302 | 0.0 | 0.0 |
| 0.2301 | 833.0 | 2499 | 0.2302 | 0.0 | 0.0 |
| 0.2307 | 834.0 | 2502 | 0.2301 | 0.0 | 0.0 |
| 0.2304 | 835.0 | 2505 | 0.2301 | 0.0 | 0.0 |
| 0.2298 | 836.0 | 2508 | 0.2301 | 0.0 | 0.0 |
| 0.2318 | 837.0 | 2511 | 0.2301 | 0.0 | 0.0 |
| 0.23 | 838.0 | 2514 | 0.2301 | 0.0 | 0.0 |
| 0.2307 | 839.0 | 2517 | 0.2301 | 0.0 | 0.0 |
| 0.231 | 840.0 | 2520 | 0.2301 | 0.0 | 0.0 |
| 0.2316 | 841.0 | 2523 | 0.2301 | 0.0 | 0.0 |
| 0.2303 | 842.0 | 2526 | 0.2301 | 0.0 | 0.0 |
| 0.231 | 843.0 | 2529 | 0.2301 | 0.0 | 0.0 |
| 0.2306 | 844.0 | 2532 | 0.2301 | 0.0 | 0.0 |
| 0.2306 | 845.0 | 2535 | 0.2301 | 0.0 | 0.0 |
| 0.2307 | 846.0 | 2538 | 0.2301 | 0.0 | 0.0 |
| 0.2304 | 847.0 | 2541 | 0.2301 | 0.0 | 0.0 |
| 0.2307 | 848.0 | 2544 | 0.2301 | 0.0 | 0.0 |
| 0.2315 | 849.0 | 2547 | 0.2301 | 0.0 | 0.0 |
| 0.2312 | 850.0 | 2550 | 0.2301 | 0.0 | 0.0 |
| 0.2311 | 851.0 | 2553 | 0.2301 | 0.0 | 0.0 |
| 0.2304 | 852.0 | 2556 | 0.2301 | 0.0 | 0.0 |
| 0.2311 | 853.0 | 2559 | 0.2301 | 0.0 | 0.0 |
| 0.2298 | 854.0 | 2562 | 0.2301 | 0.0 | 0.0 |
| 0.2302 | 855.0 | 2565 | 0.2301 | 0.0 | 0.0 |
| 0.23 | 856.0 | 2568 | 0.2301 | 0.0 | 0.0 |
| 0.2305 | 857.0 | 2571 | 0.2301 | 0.0 | 0.0 |
| 0.2305 | 858.0 | 2574 | 0.2301 | 0.0 | 0.0 |
| 0.2308 | 859.0 | 2577 | 0.2301 | 0.0 | 0.0 |
| 0.2299 | 860.0 | 2580 | 0.2301 | 0.0 | 0.0 |
| 0.2309 | 861.0 | 2583 | 0.2301 | 0.0 | 0.0 |
| 0.2304 | 862.0 | 2586 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 863.0 | 2589 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 864.0 | 2592 | 0.2300 | 0.0 | 0.0 |
| 0.2298 | 865.0 | 2595 | 0.2300 | 0.0 | 0.0 |
| 0.2303 | 866.0 | 2598 | 0.2300 | 0.0 | 0.0 |
| 0.2299 | 867.0 | 2601 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 868.0 | 2604 | 0.2300 | 0.0 | 0.0 |
| 0.2301 | 869.0 | 2607 | 0.2300 | 0.0 | 0.0 |
| 0.2303 | 870.0 | 2610 | 0.2300 | 0.0 | 0.0 |
| 0.23 | 871.0 | 2613 | 0.2300 | 0.0 | 0.0 |
| 0.2306 | 872.0 | 2616 | 0.2300 | 0.0 | 0.0 |
| 0.2308 | 873.0 | 2619 | 0.2300 | 0.0 | 0.0 |
| 0.2315 | 874.0 | 2622 | 0.2300 | 0.0 | 0.0 |
| 0.2316 | 875.0 | 2625 | 0.2300 | 0.0 | 0.0 |
| 0.2308 | 876.0 | 2628 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 877.0 | 2631 | 0.2300 | 0.0 | 0.0 |
| 0.2302 | 878.0 | 2634 | 0.2300 | 0.0 | 0.0 |
| 0.2308 | 879.0 | 2637 | 0.2300 | 0.0 | 0.0 |
| 0.23 | 880.0 | 2640 | 0.2300 | 0.0 | 0.0 |
| 0.231 | 881.0 | 2643 | 0.2300 | 0.0 | 0.0 |
| 0.2305 | 882.0 | 2646 | 0.2300 | 0.0 | 0.0 |
| 0.2304 | 883.0 | 2649 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 884.0 | 2652 | 0.2300 | 0.0 | 0.0 |
| 0.2302 | 885.0 | 2655 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 886.0 | 2658 | 0.2300 | 0.0 | 0.0 |
| 0.23 | 887.0 | 2661 | 0.2300 | 0.0 | 0.0 |
| 0.2313 | 888.0 | 2664 | 0.2300 | 0.0 | 0.0 |
| 0.2315 | 889.0 | 2667 | 0.2300 | 0.0 | 0.0 |
| 0.2299 | 890.0 | 2670 | 0.2300 | 0.0 | 0.0 |
| 0.23 | 891.0 | 2673 | 0.2300 | 0.0 | 0.0 |
| 0.2304 | 892.0 | 2676 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 893.0 | 2679 | 0.2300 | 0.0 | 0.0 |
| 0.2307 | 894.0 | 2682 | 0.2300 | 0.0 | 0.0 |
| 0.2307 | 895.0 | 2685 | 0.2300 | 0.0 | 0.0 |
| 0.2312 | 896.0 | 2688 | 0.2300 | 0.0 | 0.0 |
| 0.2302 | 897.0 | 2691 | 0.2300 | 0.0 | 0.0 |
| 0.2309 | 898.0 | 2694 | 0.2300 | 0.0 | 0.0 |
| 0.2303 | 899.0 | 2697 | 0.2299 | 0.0 | 0.0 |
| 0.2315 | 900.0 | 2700 | 0.2299 | 0.0 | 0.0 |
| 0.2311 | 901.0 | 2703 | 0.2299 | 0.0 | 0.0 |
| 0.23 | 902.0 | 2706 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 903.0 | 2709 | 0.2299 | 0.0 | 0.0 |
| 0.2305 | 904.0 | 2712 | 0.2299 | 0.0 | 0.0 |
| 0.2313 | 905.0 | 2715 | 0.2299 | 0.0 | 0.0 |
| 0.2304 | 906.0 | 2718 | 0.2299 | 0.0 | 0.0 |
| 0.2305 | 907.0 | 2721 | 0.2299 | 0.0 | 0.0 |
| 0.2304 | 908.0 | 2724 | 0.2299 | 0.0 | 0.0 |
| 0.231 | 909.0 | 2727 | 0.2299 | 0.0 | 0.0 |
| 0.2303 | 910.0 | 2730 | 0.2299 | 0.0 | 0.0 |
| 0.2303 | 911.0 | 2733 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 912.0 | 2736 | 0.2299 | 0.0 | 0.0 |
| 0.2306 | 913.0 | 2739 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 914.0 | 2742 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 915.0 | 2745 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 916.0 | 2748 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 917.0 | 2751 | 0.2299 | 0.0 | 0.0 |
| 0.2304 | 918.0 | 2754 | 0.2299 | 0.0 | 0.0 |
| 0.231 | 919.0 | 2757 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 920.0 | 2760 | 0.2299 | 0.0 | 0.0 |
| 0.23 | 921.0 | 2763 | 0.2299 | 0.0 | 0.0 |
| 0.2305 | 922.0 | 2766 | 0.2299 | 0.0 | 0.0 |
| 0.2301 | 923.0 | 2769 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 924.0 | 2772 | 0.2299 | 0.0 | 0.0 |
| 0.2302 | 925.0 | 2775 | 0.2299 | 0.0 | 0.0 |
| 0.2313 | 926.0 | 2778 | 0.2299 | 0.0 | 0.0 |
| 0.2303 | 927.0 | 2781 | 0.2299 | 0.0 | 0.0 |
| 0.2306 | 928.0 | 2784 | 0.2299 | 0.0 | 0.0 |
| 0.2306 | 929.0 | 2787 | 0.2299 | 0.0 | 0.0 |
| 0.2301 | 930.0 | 2790 | 0.2299 | 0.0 | 0.0 |
| 0.2309 | 931.0 | 2793 | 0.2299 | 0.0 | 0.0 |
| 0.2302 | 932.0 | 2796 | 0.2299 | 0.0 | 0.0 |
| 0.231 | 933.0 | 2799 | 0.2299 | 0.0 | 0.0 |
| 0.23 | 934.0 | 2802 | 0.2299 | 0.0 | 0.0 |
| 0.2296 | 935.0 | 2805 | 0.2299 | 0.0 | 0.0 |
| 0.2305 | 936.0 | 2808 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 937.0 | 2811 | 0.2299 | 0.0 | 0.0 |
| 0.2304 | 938.0 | 2814 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 939.0 | 2817 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 940.0 | 2820 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 941.0 | 2823 | 0.2299 | 0.0 | 0.0 |
| 0.2306 | 942.0 | 2826 | 0.2299 | 0.0 | 0.0 |
| 0.2302 | 943.0 | 2829 | 0.2299 | 0.0 | 0.0 |
| 0.2309 | 944.0 | 2832 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 945.0 | 2835 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 946.0 | 2838 | 0.2299 | 0.0 | 0.0 |
| 0.2301 | 947.0 | 2841 | 0.2299 | 0.0 | 0.0 |
| 0.2302 | 948.0 | 2844 | 0.2299 | 0.0 | 0.0 |
| 0.231 | 949.0 | 2847 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 950.0 | 2850 | 0.2299 | 0.0 | 0.0 |
| 0.2309 | 951.0 | 2853 | 0.2299 | 0.0 | 0.0 |
| 0.2303 | 952.0 | 2856 | 0.2299 | 0.0 | 0.0 |
| 0.2301 | 953.0 | 2859 | 0.2299 | 0.0 | 0.0 |
| 0.2311 | 954.0 | 2862 | 0.2299 | 0.0 | 0.0 |
| 0.2308 | 955.0 | 2865 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 956.0 | 2868 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 957.0 | 2871 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 958.0 | 2874 | 0.2299 | 0.0 | 0.0 |
| 0.2309 | 959.0 | 2877 | 0.2299 | 0.0 | 0.0 |
| 0.2304 | 960.0 | 2880 | 0.2299 | 0.0 | 0.0 |
| 0.231 | 961.0 | 2883 | 0.2299 | 0.0 | 0.0 |
| 0.2299 | 962.0 | 2886 | 0.2299 | 0.0 | 0.0 |
| 0.2307 | 963.0 | 2889 | 0.2298 | 0.0 | 0.0 |
| 0.2303 | 964.0 | 2892 | 0.2298 | 0.0 | 0.0 |
| 0.2303 | 965.0 | 2895 | 0.2298 | 0.0 | 0.0 |
| 0.2301 | 966.0 | 2898 | 0.2298 | 0.0 | 0.0 |
| 0.2299 | 967.0 | 2901 | 0.2298 | 0.0 | 0.0 |
| 0.2301 | 968.0 | 2904 | 0.2298 | 0.0 | 0.0 |
| 0.2308 | 969.0 | 2907 | 0.2298 | 0.0 | 0.0 |
| 0.23 | 970.0 | 2910 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 971.0 | 2913 | 0.2298 | 0.0 | 0.0 |
| 0.2306 | 972.0 | 2916 | 0.2298 | 0.0 | 0.0 |
| 0.2309 | 973.0 | 2919 | 0.2298 | 0.0 | 0.0 |
| 0.2314 | 974.0 | 2922 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 975.0 | 2925 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 976.0 | 2928 | 0.2298 | 0.0 | 0.0 |
| 0.2303 | 977.0 | 2931 | 0.2298 | 0.0 | 0.0 |
| 0.23 | 978.0 | 2934 | 0.2298 | 0.0 | 0.0 |
| 0.2303 | 979.0 | 2937 | 0.2298 | 0.0 | 0.0 |
| 0.2302 | 980.0 | 2940 | 0.2298 | 0.0 | 0.0 |
| 0.2296 | 981.0 | 2943 | 0.2298 | 0.0 | 0.0 |
| 0.2299 | 982.0 | 2946 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 983.0 | 2949 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 984.0 | 2952 | 0.2298 | 0.0 | 0.0 |
| 0.2306 | 985.0 | 2955 | 0.2298 | 0.0 | 0.0 |
| 0.2297 | 986.0 | 2958 | 0.2298 | 0.0 | 0.0 |
| 0.23 | 987.0 | 2961 | 0.2298 | 0.0 | 0.0 |
| 0.2302 | 988.0 | 2964 | 0.2298 | 0.0 | 0.0 |
| 0.23 | 989.0 | 2967 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 990.0 | 2970 | 0.2298 | 0.0 | 0.0 |
| 0.2309 | 991.0 | 2973 | 0.2298 | 0.0 | 0.0 |
| 0.2298 | 992.0 | 2976 | 0.2298 | 0.0 | 0.0 |
| 0.2295 | 993.0 | 2979 | 0.2298 | 0.0 | 0.0 |
| 0.2296 | 994.0 | 2982 | 0.2298 | 0.0 | 0.0 |
| 0.2309 | 995.0 | 2985 | 0.2298 | 0.0 | 0.0 |
| 0.231 | 996.0 | 2988 | 0.2298 | 0.0 | 0.0 |
| 0.2297 | 997.0 | 2991 | 0.2298 | 0.0 | 0.0 |
| 0.2302 | 998.0 | 2994 | 0.2298 | 0.0 | 0.0 |
| 0.2305 | 999.0 | 2997 | 0.2298 | 0.0 | 0.0 |
| 0.2298 | 1000.0 | 3000 | 0.2298 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CyberHarem/sakurabakoma_edomaeelf | CyberHarem | 2023-09-15T15:12:26Z | 0 | 0 | null | ["art", "text-to-image", "dataset:CyberHarem/sakurabakoma_edomaeelf", "license:mit", "region:us"] | text-to-image | 2023-09-15T15:00:05Z |
---
license: mit
datasets:
- CyberHarem/sakurabakoma_edomaeelf
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sakurabakoma_edomaeelf
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 4760, download `4760/sakurabakoma_edomaeelf.pt` as the embedding and `4760/sakurabakoma_edomaeelf.safetensors` as the LoRA. By using both files together, you can generate images of the desired characters.
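As a small illustration of the `<step>/<name>.{pt,safetensors}` naming convention described above, a helper can build both file paths for a chosen step. Note that `files_for_step` is a hypothetical helper written for this card, not part of the repository or any library:

```python
# Sketch of the file-naming convention used by this repository's releases.
# `files_for_step` is a hypothetical helper, not an official API.
def files_for_step(step: int, name: str = "sakurabakoma_edomaeelf"):
    """Return the (embedding, LoRA) file paths for a given training step."""
    return (f"{step}/{name}.pt", f"{step}/{name}.safetensors")

embedding, lora = files_for_step(4760)
print(embedding)  # 4760/sakurabakoma_edomaeelf.pt
print(lora)       # 4760/sakurabakoma_edomaeelf.safetensors
```

The same helper works for any of the step directories listed in the table below, e.g. `files_for_step(5100)`.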
**The best step we recommend is 4760**, with a score of 0.940. The trigger words are:
1. `sakurabakoma_edomaeelf`
2. `red_hair, twintails, red_eyes, ribbon, neck_ribbon, bangs, red_ribbon, blush`
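The trigger words above can be joined into a single prompt string; this is only a sketch of one possible prompt layout, not an official recommendation:

```python
# Sketch: combining the trigger words listed above into one prompt string.
trigger_words = [
    "sakurabakoma_edomaeelf",
    "red_hair, twintails, red_eyes, ribbon, neck_ribbon, bangs, red_ribbon, blush",
]
prompt = ", ".join(trigger_words)
print(prompt)
```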
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who face application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images produced by the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.908 | [Download](5100/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| **4760** | **0.940** | [**Download**](4760/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.930 | [Download](4420/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.939 | [Download](4080/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.849 | [Download](3740/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.894 | [Download](3400/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.913 | [Download](3060/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.911 | [Download](2720/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.900 | [Download](2380/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.864 | [Download](2040/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.824 | [Download](1700/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.833 | [Download](1360/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.855 | [Download](1020/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.752 | [Download](680/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.754 | [Download](340/sakurabakoma_edomaeelf.zip) |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
bookbot/byt5-small-wikipron-eng-latn | bookbot | 2023-09-15T15:11:51Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-05T08:51:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: byt5-small-wikipron-eng-latn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-wikipron-eng-latn
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1898
- Per: 0.3272
- Gen Len: 16.4158
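The `Per` figure above is a phoneme error rate. The card does not include the metric implementation; a minimal self-contained sketch of how PER is typically computed (token-level edit distance normalized by total reference length — an assumption about the recipe, not the exact evaluation script used here):

```python
def edit_distance(ref, hyp):
    # Classic single-row dynamic-programming Levenshtein distance over token lists.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
    return dp[-1]

def per(references, hypotheses):
    # Phoneme error rate: total edit distance / total number of reference phonemes.
    # Each string is a space-separated phoneme sequence.
    errors = sum(edit_distance(r.split(), h.split()) for r, h in zip(references, hypotheses))
    total = sum(len(r.split()) for r in references)
    return errors / total
```

For example, a one-phoneme substitution in a three-phoneme reference gives a PER of 1/3.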
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
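The total train batch size listed above is simply the per-device batch size multiplied by the gradient-accumulation steps; a quick sanity check of that arithmetic:

```python
train_batch_size = 32
gradient_accumulation_steps = 4

# Effective number of examples contributing to each optimizer update.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the value listed above
```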
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.2564 | 1.0 | 230 | 0.4016 | 0.5235 | 15.7351 |
| 0.3856 | 2.0 | 461 | 0.2648 | 0.4189 | 16.3283 |
| 0.2861 | 3.0 | 692 | 0.2248 | 0.3665 | 16.3982 |
| 0.2438 | 4.0 | 923 | 0.2090 | 0.3452 | 16.3591 |
| 0.2207 | 5.0 | 1153 | 0.2015 | 0.3403 | 16.3944 |
| 0.2049 | 6.0 | 1384 | 0.1952 | 0.3342 | 16.4001 |
| 0.193 | 7.0 | 1615 | 0.1908 | 0.3306 | 16.4006 |
| 0.185 | 8.0 | 1846 | 0.1883 | 0.3271 | 16.408 |
| 0.18 | 9.0 | 2076 | 0.1894 | 0.3276 | 16.4194 |
| 0.1751 | 9.97 | 2300 | 0.1898 | 0.3272 | 16.4158 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ostris/face-helper-sdxl-lora | ostris | 2023-09-15T15:06:46Z | 60 | 5 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"photorealism",
"realistic",
"face",
"closeup",
"tool",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
]
| text-to-image | 2023-09-15T15:06:40Z | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- photorealism
- realistic
- face
- closeup
- tool
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt:
widget:
- text: "rick sanchez from rick and morty, lab coat, drooling, drunk "
- text: "miss piggy "
- text: "miss frizzle from the magic school bus "
- text: "spongebob squarepants "
- text: "squidward from squarepants "
- text: "fred flintstone "
- text: "wilma from the flintstones "
- text: "gru from despicable me "
- text: "the grinch , studio lighting, dark background"
- text: "ninja turtles, raphiael "
---
# Face Helper - SDXL LoRA

> rick sanchez from rick and morty, lab coat, drooling, drunk
([CivitAI](https://civitai.com/models/145974))
<ul><li><p>No trigger word needed</p></li><li><p>Only makes faces</p></li><li><p>Weight of 1.0</p></li><li><p>Helps make faces more realistic</p></li><li><p>Good at making fictional characters real people</p></li><li><p>Handles prompting of ages, ethnicity, and physical attributes well</p></li></ul><p></p><p>All samples were generated with Base SDXL 1.0. No refiner / detailers / highres fixes. </p><p></p><p>This LoRA was trained on over 100k high-quality, highly labeled faces. It is just a small part of my Humans dataset. More information on that, and the thousands of tokens it has in it, can be found in the description of my <a rel="ugc" href="https://civitai.com/models/98755/humans">Humans</a> model. There are no trigger words, and I do not recommend merging this into your model as it only does close-up faces, unless that is what you are going for, in which case, go for it. </p><p></p><p>SDXL is amazing, but it is still severely lacking in the ability to make photorealistic humans, especially faces. This was designed to help with that, but it is not perfect. Eyes and teeth are better, but still not at a level I am happy with; I can only do so much with a LoRA. </p><p></p><p>I have also been training and tuning a full realistic SDXL model based on my full and expanded humans dataset since SDXL 1.0 was released, but it has a long way to go before I will be happy with it. </p>
## Image examples for the model:

> miss piggy

> miss frizzle from the magic school bus

> spongebob squarepants

> squidward from squarepants

> fred flintstone

> wilma from the flintstones

> gru from despicable me

> the grinch , studio lighting, dark background

> ninja turtles, raphiael
|
ys7yoo/nli_sts_klue_roberta_large_ep5_ep5 | ys7yoo | 2023-09-15T15:00:58Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_klue_roberta_large_ep5",
"base_model:finetune:ys7yoo/nli_klue_roberta_large_ep5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T14:38:53Z | ---
base_model: ys7yoo/nli_klue_roberta_large_ep5
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_ys7yoo_nli_klue_roberta_large_ep5_ep5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_ys7yoo_nli_klue_roberta_large_ep5_ep5
This model is a fine-tuned version of [ys7yoo/nli_klue_roberta_large_ep5](https://huggingface.co/ys7yoo/nli_klue_roberta_large_ep5) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3389
- Mse: 0.3389
- Mae: 0.4252
- R2: 0.8448
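The evaluation metrics above (MSE, MAE, R²) are standard regression measures for STS score prediction. A minimal sketch of their usual definitions (assumed, not the exact evaluation script used for this card):

```python
def regression_metrics(y_true, y_pred):
    # Mean squared error, mean absolute error, and coefficient of determination.
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    # R^2 = 1 - SS_res / SS_tot; SS_res equals mse * n.
    r2 = 1 - (mse * n) / ss_tot
    return mse, mae, r2
```

Perfect predictions give MSE = MAE = 0 and R² = 1; the R² of 0.8448 above means the model explains roughly 84% of the variance in the gold STS scores.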
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.838 | 1.0 | 183 | 0.6427 | 0.6427 | 0.6072 | 0.7057 |
| 0.1578 | 2.0 | 366 | 0.3120 | 0.3120 | 0.4220 | 0.8571 |
| 0.1013 | 3.0 | 549 | 0.4612 | 0.4612 | 0.5016 | 0.7888 |
| 0.0676 | 4.0 | 732 | 0.2982 | 0.2982 | 0.3974 | 0.8635 |
| 0.0436 | 5.0 | 915 | 0.3389 | 0.3389 | 0.4252 | 0.8448 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
GabSo/santacoder-finetuned-the-stack-bash | GabSo | 2023-09-15T14:43:21Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:bigcode/santacoder",
"base_model:finetune:bigcode/santacoder",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-10T10:32:16Z | ---
license: bigcode-openrail-m
base_model: bigcode/santacoder
tags:
- generated_from_trainer
model-index:
- name: santacoder-finetuned-the-stack-bash
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-bash
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.1 | 1 | 1.6955 |
| No log | 0.2 | 2 | 3.6096 |
| No log | 0.3 | 3 | 1.5787 |
| No log | 0.4 | 4 | 1.8131 |
| No log | 0.5 | 5 | 1.0994 |
| No log | 0.6 | 6 | 1.0921 |
| No log | 0.7 | 7 | 0.9509 |
| No log | 0.8 | 8 | 0.8762 |
| No log | 0.9 | 9 | 0.8375 |
| 1.3831 | 1.0 | 10 | 0.8294 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Peter/qloratry | Peter | 2023-09-15T14:38:50Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T14:35:51Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
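The settings above are the 4-bit NF4 double-quantization recipe commonly used for QLoRA. As a hypothetical reconstruction (not taken from this repo's code), these are the keyword arguments one would pass to `transformers.BitsAndBytesConfig` when reloading the base model for this adapter:

```python
# Hypothetical reconstruction of the quantization config listed above.
# In practice these kwargs go to transformers.BitsAndBytesConfig, and the
# compute dtype is passed as torch.bfloat16 rather than a string.
quant_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "bfloat16",
}
```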
### Framework versions
- PEFT 0.5.0
|
loicspigeleer/ppo-SnowballTarget | loicspigeleer | 2023-09-15T14:34:25Z | 20 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-09-15T14:34:22Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: loicspigeleer/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bongo2112/sdxl-db-diamondplatnumz-portrait | bongo2112 | 2023-09-15T14:31:38Z | 3 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-15T14:29:34Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of mwambinonyange man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
cloudwalkerw/wavlm-base | cloudwalkerw | 2023-09-15T14:30:53Z | 157 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"audio-classification",
"generated_from_trainer",
"base_model:microsoft/wavlm-base",
"base_model:finetune:microsoft/wavlm-base",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-09-15T09:37:37Z | ---
base_model: microsoft/wavlm-base
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wavlm-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-base
This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3307
- Accuracy: 0.8974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3744 | 1.0 | 793 | 0.3307 | 0.8974 |
| 0.3699 | 2.0 | 1586 | 0.3342 | 0.8974 |
| 0.2898 | 3.0 | 2379 | 0.3341 | 0.8974 |
| 0.3126 | 4.0 | 3173 | 0.3363 | 0.8974 |
| 0.3753 | 5.0 | 3966 | 0.3309 | 0.8974 |
| 0.3617 | 6.0 | 4759 | 0.3325 | 0.8974 |
| 0.3453 | 7.0 | 5552 | 0.3315 | 0.8974 |
| 0.3337 | 8.0 | 6346 | 0.3364 | 0.8974 |
| 0.2829 | 9.0 | 7139 | 0.3327 | 0.8974 |
| 0.3189 | 10.0 | 7930 | 0.3321 | 0.8974 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.0.post302
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/yumemi_riamu_idolmastercinderellagirls | CyberHarem | 2023-09-15T14:24:06Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yumemi_riamu_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T14:08:54Z | ---
license: mit
datasets:
- CyberHarem/yumemi_riamu_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yumemi_riamu_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7000, you need to download `7000/yumemi_riamu_idolmastercinderellagirls.pt` as the embedding and `7000/yumemi_riamu_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7000**, with the score of 0.995. The trigger words are:
1. `yumemi_riamu_idolmastercinderellagirls`
2. `pink_hair, multicolored_hair, two-tone_hair, bangs, blue_hair, pink_eyes, short_hair, ahoge, hair_intakes, blush, breasts, open_mouth, large_breasts, fang, heart, collarbone, jewelry`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who face application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.991 | [Download](7500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](7500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| **7000** | **0.995** | [**Download**](7000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](7000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.989 | [Download](6500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.980 | [Download](6000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.984 | [Download](5500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.963 | [Download](5000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.991 | [Download](4500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.994 | [Download](4000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.993 | [Download](3500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.991 | [Download](3000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.991 | [Download](2500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.995 | [Download](2000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.996 | [Download](1500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.991 | [Download](1000/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1000/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.965 | [Download](500/yumemi_riamu_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](500/previews/pattern_2.png) |  |  |  |  |  | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
ankush37/roberta-plagi | ankush37 | 2023-09-15T14:22:24Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T12:59:13Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-plagi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-plagi
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6280
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6588 | 1.0 | 510 | 0.6471 |
| 0.5803 | 2.0 | 1020 | 0.6280 |
| 0.59 | 3.0 | 1530 | 0.6281 |
| 0.5701 | 4.0 | 2040 | 0.6309 |
| 0.6614 | 5.0 | 2550 | 0.6282 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Q317/EmoraBert | Q317 | 2023-09-15T14:11:17Z | 69 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"base_model:wonrax/phobert-base-vietnamese-sentiment",
"base_model:finetune:wonrax/phobert-base-vietnamese-sentiment",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T13:04:03Z | ---
license: mit
base_model: wonrax/phobert-base-vietnamese-sentiment
tags:
- generated_from_keras_callback
model-index:
- name: Q317/EmoraBert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Q317/EmoraBert
This model is a fine-tuned version of [wonrax/phobert-base-vietnamese-sentiment](https://huggingface.co/wonrax/phobert-base-vietnamese-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1216
- Validation Loss: 1.3423
- Train Accuracy: 0.6833
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 220740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
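The learning-rate schedule in the optimizer config above is a Keras `PolynomialDecay` with `power=1.0`, i.e. a plain linear decay from 2e-5 to 0 over 220,740 steps. A minimal reimplementation for illustration (not the Keras code itself):

```python
def polynomial_decay(step, initial_lr=2e-5, end_lr=0.0, decay_steps=220_740, power=1.0):
    # Keras-style PolynomialDecay with cycle=False: clamp the step at
    # decay_steps, then interpolate from initial_lr down to end_lr.
    step = min(step, decay_steps)
    frac = (1.0 - step / decay_steps) ** power
    return (initial_lr - end_lr) * frac + end_lr

# With power=1.0 the schedule is linear: halfway through, the LR is halved.
print(polynomial_decay(0))        # 2e-05
print(polynomial_decay(110_370))  # 1e-05
```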
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.8611 | 0.7685 | 0.6586 | 0 |
| 0.6951 | 0.7397 | 0.6802 | 1 |
| 0.5578 | 0.7740 | 0.6894 | 2 |
| 0.4277 | 0.8475 | 0.6849 | 3 |
| 0.3222 | 0.9853 | 0.6889 | 4 |
| 0.2376 | 1.0837 | 0.6840 | 5 |
| 0.1982 | 1.1422 | 0.6771 | 6 |
| 0.1618 | 1.2596 | 0.6786 | 7 |
| 0.1341 | 1.3652 | 0.6773 | 8 |
| 0.1216 | 1.3423 | 0.6833 | 9 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gmongaras/Wizard_7B_Reddit_Political_2019_13B | gmongaras | 2023-09-15T14:11:06Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T13:17:20Z | ---
license: openrail
---
Model from: https://huggingface.co/WizardLM/WizardLM-13B-V1.2
Trained on: https://huggingface.co/datasets/gmongaras/reddit_political_2019
Trained for about 18,000 steps with a batch size of 8, 2 gradient accumulation steps, and LoRA adapters on all layers. |