| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-15 00:43:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 521 classes |
| tags | list | lengths 1–4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-15 00:40:56 |
| card | string | lengths 11–1.01M |
lrakotoson/scitldr-catts-xsum-ao | lrakotoson | 2023-09-18T08:16:53Z | 129 | 9 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:xsum",
"dataset:scitldr",
"arxiv:2004.15011",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
datasets:
- xsum
- scitldr
widget:
- text: "We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations."
license: "apache-2.0"
---
# AI2 SciTLDR
Fairseq checkpoints from CATTS XSUM converted to Transformers BART (Abstract Only)
Original repository: [https://github.com/allenai/scitldr](https://github.com/allenai/scitldr)
## Demo
A running demo of the AI2 model can be found [here](https://scitldr.apps.allenai.org).
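For programmatic use, here is a minimal inference sketch (our own addition, assuming the standard `transformers` seq2seq API; the card itself ships no snippet):
```python
# Hedged sketch: standard transformers usage for a BART text2text checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lrakotoson/scitldr-catts-xsum-ao")
model = AutoModelForSeq2SeqLM.from_pretrained("lrakotoson/scitldr-catts-xsum-ao")

abstract = "We introduce TLDR generation, a new form of extreme summarization for scientific papers."
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```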
### Citing
If you use the code, dataset, or model weights in your research, please cite "TLDR: Extreme Summarization of Scientific Documents."
```
@article{cachola2020tldr,
title={{TLDR}: Extreme Summarization of Scientific Documents},
author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
journal={arXiv:2004.15011},
year={2020},
}
```
SciTLDR is an open-source project developed by the Allen Institute for Artificial Intelligence (AI2). AI2 is a non-profit institute with the mission to contribute to humanity through high-impact AI research and engineering. |
CyberHarem/miyamori_aoi_shirobako | CyberHarem | 2023-09-18T08:14:34Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/miyamori_aoi_shirobako",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T07:49:26Z | ---
license: mit
datasets:
- CyberHarem/miyamori_aoi_shirobako
pipeline_tag: text-to-image
tags:
- art
---
# Lora of miyamori_aoi_shirobako
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use both files together. The pt file serves as a textual embedding, while the safetensors file is loaded as a LoRA.
For example, to use the model from step 8580, download `8580/miyamori_aoi_shirobako.pt` as the embedding and `8580/miyamori_aoi_shirobako.safetensors` as the LoRA. With both files loaded together, you can generate images of the desired character, as sketched below.
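A minimal `diffusers`-based loading sketch (our own assumption; the card does not prescribe a specific loader, and WebUI users would instead place the `.pt` file in their embeddings folder and the `.safetensors` file in their Lora folder):
```python
# Hedged sketch -- file paths refer to the step-8580 artifacts from this
# repository, downloaded locally.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The .pt file acts as a textual-inversion embedding ...
pipe.load_textual_inversion("8580/miyamori_aoi_shirobako.pt",
                            token="miyamori_aoi_shirobako")
# ... and the .safetensors file is loaded as a LoRA.
pipe.load_lora_weights("8580", weight_name="miyamori_aoi_shirobako.safetensors")

image = pipe("miyamori_aoi_shirobako, short_hair, brown_hair, blue_eyes",
             num_inference_steps=30).images[0]
image.save("preview.png")
```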
**The best step we recommend is 8580**, with a score of 0.761. The trigger words are:
1. `miyamori_aoi_shirobako`
2. `short_hair, brown_hair, blue_eyes`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9900 | 0.728 | [Download](9900/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9900/previews/nude.png) | [<NSFW, click to see>](9900/previews/nude2.png) |  |  |
| 9240 | 0.599 | [Download](9240/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9240/previews/nude.png) | [<NSFW, click to see>](9240/previews/nude2.png) |  |  |
| **8580** | **0.761** | [**Download**](8580/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8580/previews/nude.png) | [<NSFW, click to see>](8580/previews/nude2.png) |  |  |
| 7920 | 0.732 | [Download](7920/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7920/previews/nude.png) | [<NSFW, click to see>](7920/previews/nude2.png) |  |  |
| 7260 | 0.715 | [Download](7260/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7260/previews/nude.png) | [<NSFW, click to see>](7260/previews/nude2.png) |  |  |
| 6600 | 0.720 | [Download](6600/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 5940 | 0.708 | [Download](5940/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5280 | 0.742 | [Download](5280/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4620 | 0.752 | [Download](4620/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) |  |  |
| 3960 | 0.742 | [Download](3960/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3300 | 0.699 | [Download](3300/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3300/previews/nude.png) | [<NSFW, click to see>](3300/previews/nude2.png) |  |  |
| 2640 | 0.694 | [Download](2640/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 1980 | 0.684 | [Download](1980/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1980/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1980/previews/nude.png) | [<NSFW, click to see>](1980/previews/nude2.png) |  |  |
| 1320 | 0.614 | [Download](1320/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 660 | 0.637 | [Download](660/miyamori_aoi_shirobako.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](660/previews/nude.png) | [<NSFW, click to see>](660/previews/nude2.png) |  |  |
|
vdivya/dummy-model | vdivya | 2023-09-18T08:09:13Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T07:57:50Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0608
- Train Accuracy: 0.9804
- Validation Loss: 0.2496
- Validation Accuracy: 0.9140
- Epoch: 2
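For completeness, a minimal inference sketch (our own addition, assuming the standard `transformers` TensorFlow API; the card ships no usage example):
```python
# Hedged sketch: load the fine-tuned TF checkpoint and classify one sentence.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("vdivya/dummy-model")
model = TFAutoModelForSequenceClassification.from_pretrained("vdivya/dummy-model")

inputs = tokenizer("This movie was great!", return_tensors="tf")
logits = model(**inputs).logits
print(tf.argmax(logits, axis=-1).numpy())  # predicted class id
```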
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 25257, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2262 | 0.9143 | 0.2503 | 0.9094 | 0 |
| 0.1133 | 0.9622 | 0.2515 | 0.9083 | 1 |
| 0.0608 | 0.9804 | 0.2496 | 0.9140 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kurileo/blip2-opt-6.7b-refines | kurileo | 2023-09-18T08:06:22Z | 4 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T08:04:45Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
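For reference, the settings above correspond roughly to the following `transformers` setup (a sketch on our part, not part of the original card):
```python
# Hedged sketch: the BitsAndBytesConfig equivalent of the settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```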
### Framework versions
- PEFT 0.5.0
|
anantonios9/distilbert-base-uncased-distilled-clinc | anantonios9 | 2023-09-18T08:05:08Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T07:46:19Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9493548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2958
- Accuracy: 0.9494
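A minimal inference sketch (assuming the standard `transformers` pipeline API; not part of the auto-generated card):
```python
# Hedged sketch: intent classification with the distilled CLINC model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="anantonios9/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Please transfer $100 from checking to savings."))
```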
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0462 | 1.0 | 318 | 2.2419 | 0.7339 |
| 1.7248 | 2.0 | 636 | 1.1431 | 0.8674 |
| 0.8983 | 3.0 | 954 | 0.6406 | 0.9148 |
| 0.5162 | 4.0 | 1272 | 0.4438 | 0.9368 |
| 0.3473 | 5.0 | 1590 | 0.3622 | 0.9435 |
| 0.2664 | 6.0 | 1908 | 0.3288 | 0.9461 |
| 0.2256 | 7.0 | 2226 | 0.3150 | 0.9481 |
| 0.2032 | 8.0 | 2544 | 0.3009 | 0.9474 |
| 0.1918 | 9.0 | 2862 | 0.2980 | 0.9474 |
| 0.1855 | 10.0 | 3180 | 0.2958 | 0.9494 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kurileo/blip2-opt-2.7b-refines | kurileo | 2023-09-18T08:03:37Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T08:02:34Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
ernestum/ppo-seals-CartPole-v0 | ernestum | 2023-09-18T07:56:54Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/CartPole-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:56:08Z | ---
library_name: stable-baselines3
tags:
- seals/CartPole-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/CartPole-v0
type: seals/CartPole-v0
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/CartPole-v0**
This is a trained model of a **PPO** agent playing **seals/CartPole-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/CartPole-v0 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/CartPole-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/CartPole-v0 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/CartPole-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/CartPole-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/CartPole-v0 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.4),
('ent_coef', 0.008508727919228772),
('gae_lambda', 0.9),
('gamma', 0.9999),
('learning_rate', 0.0012403278189645594),
('max_grad_norm', 0.8),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.489343896591493),
('normalize', False)])
```
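For reference, a rough SB3 sketch of a learner configured with these hyperparameters (our own reconstruction, assuming the `seals` package is installed; the RL Zoo normally wires this up via `rl_zoo3.train`):
```python
# Hedged sketch: building an equivalent PPO learner directly in SB3.
import torch
import seals  # noqa: F401 -- registers the seals/CartPole-v0 environment
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("seals/CartPole-v0", n_envs=8)
model = PPO(
    "MlpPolicy", env,
    batch_size=256, clip_range=0.4, ent_coef=0.008508727919228772,
    gae_lambda=0.9, gamma=0.9999, learning_rate=0.0012403278189645594,
    max_grad_norm=0.8, n_epochs=10, n_steps=512,
    policy_kwargs=dict(activation_fn=torch.nn.ReLU,
                       net_arch=[dict(pi=[64, 64], vf=[64, 64])]),
)
model.learn(total_timesteps=100_000)
```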
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
bene-ges/tts_ru_hifigan_ruslan | bene-ges | 2023-09-18T07:54:29Z | 19 | 6 | nemo | [
"nemo",
"tts",
"text-to-speech",
"Vocoder",
"ru",
"license:cc-by-nc-4.0",
"region:us"
]
| text-to-speech | 2023-04-18T08:05:03Z | ---
license: cc-by-nc-4.0
language:
- ru
library_name: nemo
tags:
- tts
- text-to-speech
- Vocoder
---
### How to use
See an example inference pipeline for Russian TTS (G2P + FastPitch + HifiGAN) in this [notebook](https://github.com/bene-ges/nemo_compatible/blob/main/notebooks/Russian_TTS_with_IPA_G2P_FastPitch_and_HifiGAN.ipynb).
Or use this [bash-script](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/tts/ru_ipa_fastpitch_hifigan/test.sh).
### Input
This model accepts batches of mel spectrograms.
### Output
This model outputs audio at 22050Hz.
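A rough NeMo loading sketch (our own assumption -- the checkpoint file name below is hypothetical, so check the repository's file listing; the linked notebook is the authoritative end-to-end example):
```python
# Hedged sketch: restore the vocoder and convert a mel batch to audio.
import torch
from huggingface_hub import hf_hub_download
from nemo.collections.tts.models import HifiGanModel

# Assumed .nemo file name -- verify against the repo contents.
path = hf_hub_download("bene-ges/tts_ru_hifigan_ruslan",
                       "tts_ru_hifigan_ruslan.nemo")
vocoder = HifiGanModel.restore_from(path)

# Placeholder mel-spectrogram batch; in practice this comes from FastPitch.
spec = torch.rand(1, 80, 200)
audio = vocoder.convert_spectrogram_to_audio(spec=spec)  # 22050 Hz waveforms
```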
## Training
The NeMo toolkit [1] was used for training the model for several epochs.
The full training script is [here](https://github.com/bene-ges/nemo_compatible/blob/main/scripts/tts/ru_ipa_fastpitch_hifigan/train.sh).
### Datasets
This model is trained on [RUSLAN](https://ruslan-corpus.github.io/) [2] corpus (single speaker, male voice) sampled at 22050Hz.
## References
- [1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
- [2] Gabdrakhmanov L., Garaev R., Razinkov E. (2019) RUSLAN: Russian Spoken Language Corpus for Speech Synthesis. In: Salah A., Karpov A., Potapova R. (eds) Speech and Computer. SPECOM 2019. Lecture Notes in Computer Science, vol 11658. Springer, Cham |
ernestum/sac-seals-HalfCheetah-v1 | ernestum | 2023-09-18T07:53:35Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/HalfCheetah-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:53:34Z | ---
library_name: stable-baselines3
tags:
- seals/HalfCheetah-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/HalfCheetah-v1
type: seals/HalfCheetah-v1
metrics:
- type: mean_reward
value: 1183.52 +/- 22.65
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/HalfCheetah-v1**
This is a trained model of a **SAC** agent playing **seals/HalfCheetah-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/HalfCheetah-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/HalfCheetah-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/HalfCheetah-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/HalfCheetah-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/HalfCheetah-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/HalfCheetah-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 100000),
('gamma', 0.95),
('learning_rate', 0.000884624878315995),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -0.6932709443503001,
'net_arch': [64, 64],
'use_sde': False}),
('tau', 0.01),
('train_freq', 64),
('normalize', False)])
```
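As with the PPO sketch above, a rough SB3 reconstruction of this configuration (our own sketch, with `seals` assumed installed):
```python
# Hedged sketch: building an equivalent SAC learner directly in SB3.
import seals  # noqa: F401 -- registers the seals/HalfCheetah-v1 environment
from stable_baselines3 import SAC

model = SAC(
    "MlpPolicy", "seals/HalfCheetah-v1",
    batch_size=2048, buffer_size=100_000, gamma=0.95,
    learning_rate=0.000884624878315995, learning_starts=10_000,
    tau=0.01, train_freq=64,
    policy_kwargs=dict(log_std_init=-0.6932709443503001,
                       net_arch=[64, 64], use_sde=False),
)
model.learn(total_timesteps=1_000_000)
```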
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/sac-seals-Hopper-v1 | ernestum | 2023-09-18T07:52:51Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Hopper-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:53:03Z | ---
library_name: stable-baselines3
tags:
- seals/Hopper-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Hopper-v1
type: seals/Hopper-v1
metrics:
- type: mean_reward
value: 2279.30 +/- 124.09
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Hopper-v1**
This is a trained model of a **SAC** agent playing **seals/Hopper-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Hopper-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Hopper-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.98),
('learning_rate', 0.001709807687567946),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -1.6829391077276037,
'net_arch': [256, 256],
'use_sde': False}),
('tau', 0.08),
('train_freq', 32),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/sac-seals-Walker2d-v1 | ernestum | 2023-09-18T07:51:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Walker2d-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T07:50:56Z | ---
library_name: stable-baselines3
tags:
- seals/Walker2d-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Walker2d-v1
type: seals/Walker2d-v1
metrics:
- type: mean_reward
value: 5665.26 +/- 225.00
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Walker2d-v1**
This is a trained model of a **SAC** agent playing **seals/Walker2d-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Walker2d-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Walker2d-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Walker2d-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Walker2d-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Walker2d-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Walker2d-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.99),
('learning_rate', 0.0005845844772048097),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': 0.1955317469998743,
'net_arch': [400, 300],
'use_sde': False}),
('tau', 0.02),
('train_freq', 1),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nchen909/codellama-7b-chinese-sft-v1-deprecated | nchen909 | 2023-09-18T07:51:08Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-13T09:32:30Z | ---
library_name: peft
---
## Training data
alpaca_gpt4_zh
### Framework versions
- PEFT 0.5.0
|
ernestum/ppo-seals-Walker2d-v1 | ernestum | 2023-09-18T07:48:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Walker2d-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:51:52Z | ---
library_name: stable-baselines3
tags:
- seals/Walker2d-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Walker2d-v1
type: seals/Walker2d-v1
metrics:
- type: mean_reward
value: 2465.56 +/- 272.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Walker2d-v1**
This is a trained model of a **PPO** agent playing **seals/Walker2d-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Walker2d-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Walker2d-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Walker2d-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Walker2d-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Walker2d-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Walker2d-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 8),
('clip_range', 0.4),
('ent_coef', 0.00013057334805552262),
('gae_lambda', 0.92),
('gamma', 0.98),
('learning_rate', 3.791707778339674e-05),
('max_grad_norm', 0.6),
('n_envs', 1),
('n_epochs', 5),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.98, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [256, 256], 'vf': [256, 256]}]}),
('vf_coef', 0.6167177795726859),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.98,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/ppo-seals-Humanoid-v1 | ernestum | 2023-09-18T07:47:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Humanoid-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T07:46:45Z | ---
library_name: stable-baselines3
tags:
- seals/Humanoid-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Humanoid-v1
type: seals/Humanoid-v1
metrics:
- type: mean_reward
value: 3224.12 +/- 925.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Humanoid-v1**
This is a trained model of a **PPO** agent playing **seals/Humanoid-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Humanoid-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Humanoid-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.2),
('ent_coef', 2.0745206045994986e-05),
('gae_lambda', 0.92),
('gamma', 0.999),
('learning_rate', 2.0309225666232827e-05),
('max_grad_norm', 0.5),
('n_envs', 1),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 10000000.0),
('normalize',
{'gamma': 0.999, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [256, 256], 'vf': [256, 256]}]}),
('vf_coef', 0.819262464558427),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.999,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/ppo-seals-Ant-v1 | ernestum | 2023-09-18T07:44:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Ant-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:50:19Z | ---
library_name: stable-baselines3
tags:
- seals/Ant-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Ant-v1
type: seals/Ant-v1
metrics:
- type: mean_reward
value: 2461.22 +/- 674.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Ant-v1**
This is a trained model of a **PPO** agent playing **seals/Ant-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Ant-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Ant-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Ant-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Ant-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Ant-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Ant-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('clip_range', 0.3),
('ent_coef', 3.1441389214159857e-06),
('gae_lambda', 0.8),
('gamma', 0.995),
('learning_rate', 0.00017959211641976886),
('max_grad_norm', 0.9),
('n_epochs', 10),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.995, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.4351450387648799),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.995,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/ppo-seals-MountainCar-v0 | ernestum | 2023-09-18T07:43:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:50:03Z | ---
library_name: stable-baselines3
tags:
- seals/MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/MountainCar-v0
type: seals/MountainCar-v0
metrics:
- type: mean_reward
value: -97.00 +/- 8.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/MountainCar-v0**
This is a trained model of a **PPO** agent playing **seals/MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/MountainCar-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/MountainCar-v0 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('clip_range', 0.2),
('ent_coef', 6.4940755116195606e-06),
('gae_lambda', 0.98),
('gamma', 0.99),
('learning_rate', 0.0004476103728105138),
('max_grad_norm', 1),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 256),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.99, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.25988158989488963),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.99,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
FarizFirdaus/image_classification | FarizFirdaus | 2023-09-18T07:39:03Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-18T04:03:30Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.46875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4916
- Accuracy: 0.4688
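A minimal inference sketch (assuming the standard `transformers` pipeline API; not part of the auto-generated card):
```python
# Hedged sketch: classify an image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="FarizFirdaus/image_classification")
print(classifier("example.jpg"))  # local path or URL of an image
```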
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 2.0695 | 0.1812 |
| No log | 2.0 | 40 | 2.0566 | 0.2062 |
| No log | 3.0 | 60 | 2.0300 | 0.2625 |
| No log | 4.0 | 80 | 1.9731 | 0.3125 |
| No log | 5.0 | 100 | 1.8858 | 0.3375 |
| No log | 6.0 | 120 | 1.7904 | 0.3438 |
| No log | 7.0 | 140 | 1.7051 | 0.3875 |
| No log | 8.0 | 160 | 1.6312 | 0.4 |
| No log | 9.0 | 180 | 1.5429 | 0.45 |
| No log | 10.0 | 200 | 1.4916 | 0.4688 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
prince99/results | prince99 | 2023-09-18T07:31:09Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"region:us"
]
| null | 2023-09-18T07:30:37Z | ---
base_model: meta-llama/Llama-2-13b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
sd-dreambooth-library/my-cat | sd-dreambooth-library | 2023-09-18T07:24:49Z | 34 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-17T15:57:27Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### my-cat on Stable Diffusion via Dreambooth
#### model by hosnasn
This is the Stable Diffusion model fine-tuned on the my-cat concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<cat-toy> toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
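A minimal `diffusers` inference sketch (our own addition, assuming the standard StableDiffusionPipeline API; see the linked Colab for the full workflow):
```python
# Hedged sketch: generate an image with the Dreambooth concept.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/my-cat", torch_dtype=torch.float16
).to("cuda")

image = pipe("a <cat-toy> toy floating in space").images[0]
image.save("cat_toy.png")
```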
Here are the images used for training this concept:




|
kming/unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train | kming | 2023-09-18T07:21:10Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"audio-xvector",
"generated_from_trainer",
"dataset:edinburghcstr/ami",
"base_model:microsoft/unispeech-sat-base-plus-sv",
"base_model:finetune:microsoft/unispeech-sat-base-plus-sv",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-18T07:11:54Z | ---
base_model: microsoft/unispeech-sat-base-plus-sv
tags:
- generated_from_trainer
datasets:
- edinburghcstr/ami
model-index:
- name: unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train
This model is a fine-tuned version of [microsoft/unispeech-sat-base-plus-sv](https://huggingface.co/microsoft/unispeech-sat-base-plus-sv) on the ami dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mussso/lora-trained-xl | mussso | 2023-09-18T07:18:45Z | 6 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-18T07:16:08Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Kuroshiba raizo dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mussso/lora-trained-xl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of Kuroshiba raizo dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
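Since this card ships no usage example, here is a minimal sketch (our own assumption, mirroring the standard diffusers SDXL LoRA workflow used elsewhere in this collection):
```python
# Hedged sketch: SDXL base pipeline with these LoRA weights and the fp16 VAE.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix",
                                    torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae, torch_dtype=torch.float16, variant="fp16",
)
pipe.load_lora_weights("mussso/lora-trained-xl")
pipe.to("cuda")

image = pipe("a photo of Kuroshiba raizo dog", num_inference_steps=30).images[0]
```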
|
Archfiend/ardic-ai-sd-fdb | Archfiend | 2023-09-18T07:17:21Z | 17 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-21T20:20:03Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ardic-ai-sd-fdb Dreambooth model trained by Archfiend
Sample pictures of this concept:
|
marcelsamyn/lora-trained-xl-folder | marcelsamyn | 2023-09-18T07:16:10Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:marcelsamyn/marcelsamyn3",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-18T06:27:31Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: marcelsamyn
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- marcelsamyn/marcelsamyn3
---
# LoRA DreamBooth - marcelsamyn/lora-trained-xl-folder
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained on the concept prompt:
`marcelsamyn`
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To run the base model together with these LoRA weights, you can use:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
# This is where you load your trained weights
pipe.load_lora_weights('marcelsamyn/lora-trained-xl-folder')
pipe.to("cuda")
prompt = "A majestic marcelsamyn jumping from a big stone at night"
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
```
|
nanom/model-hf-vizwiz-bert-uncased | nanom | 2023-09-18T07:03:50Z | 116 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-18T07:02:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_vizwi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_vizwi
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2115
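A minimal fill-mask sketch (our own addition, assuming the standard `transformers` pipeline API):
```python
# Hedged sketch: masked-token prediction with the adapted BERT checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="nanom/model-hf-vizwiz-bert-uncased")
print(fill("A person holding a [MASK] in their hand."))
```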
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6267 | 1.0 | 411 | 1.3774 |
| 1.3601 | 2.0 | 822 | 1.3225 |
| 1.2577 | 3.0 | 1233 | 1.2261 |
| 1.2343 | 4.0 | 1644 | 1.2729 |
| 1.1936 | 5.0 | 2055 | 1.2580 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
warp-ai/wuerstchen-prior-model-base | warp-ai | 2023-09-18T07:02:05Z | 24 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2306.00637",
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2023-09-03T19:39:26Z | ---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
## Würstchen - Overview
Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make
use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial
compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a
two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used for current top-performing models, which also makes inference cheaper and faster.
## Würstchen - Prior
The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to decode the latents into pixel space.
### Prior - Model - Base
This is the base checkpoint for the Prior (Stage C). This means this is only pretrained and generates mostly standard images. We recommend using the [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated),
as this is our best checkpoint for the Prior (Stage C) because it was finetuned on a curated dataset. However, we recommend this checkpoint if you want to finetune Würstchen
on your own large dataset, as the other checkpoints are already biased towards being more artistic. This checkpoint should provide a fairly standard baseline to finetune
from, as long as your dataset is rather large.
**Note:** This checkpoint was also already trained on multi-aspect-ratios, meaning you can generate larger images than just 1024x1024. Sometimes generations up to 2048x2048
even work. Feel free to try it out!
**Also Note:** The base checkpoint usually requires a higher classifier-free-guidance value (`guidance_scale=8.0`) and also a negative caption in order to make good
looking images. The [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated) and [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned)
usually don't need a negative caption and work better with a lower classifier-free-guidance value (`guidance_scale=4.0`).
### Image Sizes
Würstchen was trained on image resolutions between 1024x1024 & 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely fast to new resolutions. So finetuning it at 2048x2048 should be computationally cheap.
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
## How to run
This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
```py
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import WuerstchenPrior, default_stage_c_timesteps
device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2
prior = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-base", torch_dtype=dtype).to(device)
prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
"warp-ai/wuerstchen-prior", prior=prior, torch_dtype=dtype
).to(device)
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
"warp-ai/wuerstchen", torch_dtype=dtype
).to(device)
caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = "bad anatomy, blurry, fuzzy, extra arms, extra fingers, poorly drawn hands, disfigured, tiling, deformed, mutated, drawing"
prior_output = prior_pipeline(
prompt=caption,
height=1024,
width=1024,
    timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=8.0,
num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
image_embeddings=prior_output.image_embeddings,
prompt=caption,
negative_prompt=negative_prompt,
num_images_per_prompt=num_images_per_prompt,
guidance_scale=0.0,
output_type="pil",
).images
```
## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**
```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
      author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Environmental Impact
**Würstchen v2** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq. |
warp-ai/wuerstchen-prior-model-interpolated | warp-ai | 2023-09-18T07:01:48Z | 23 | 3 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2306.00637",
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2023-09-03T19:45:43Z | ---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
## Würstchen - Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make
use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial
compression. This was unseen before, because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a
two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used by current top-performing models, which
also makes inference cheaper and faster.
## Würstchen - Prior
The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to decode the latents into pixel space.
### Prior - Model - Interpolated
The interpolated model is our current best Prior (Stage C) checkpoint. It is an interpolation between our [base model](https://huggingface.co/warp-ai/wuerstchen-prior-model-base) and the [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned).
We created this interpolation because the finetuned model became too artistic and often generates only artistic images. The base model, however, is usually very photorealistic.
As a result, we combined both by interpolating their weights at 50%, i.e., the midpoint between the base and the finetuned model (`0.5 * base_weights + 0.5 * finetuned_weights`).
You can also interpolate the [base model](https://huggingface.co/warp-ai/wuerstchen-prior-model-base) and the [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned)
however you like, and perhaps find an interpolation that fits your needs better than this checkpoint.
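A minimal weight-interpolation sketch (assuming both checkpoints expose identical parameter names, which holds since they share the same Stage C architecture; the output path is a placeholder):
```py
import torch
from diffusers.pipelines.wuerstchen import WuerstchenPrior

base = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-base")
finetuned = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-finetuned")

alpha = 0.5  # 0.0 = pure base, 1.0 = pure finetuned
finetuned_state = finetuned.state_dict()
merged = {
    name: torch.lerp(param, finetuned_state[name], alpha)
    for name, param in base.state_dict().items()
}
base.load_state_dict(merged)
base.save_pretrained("wuerstchen-prior-custom-interpolation")  # placeholder path
```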
### Image Sizes
Würstchen was trained on image resolutions between 1024x1024 & 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely fast to new resolutions. So finetuning it at 2048x2048 should be computationally cheap.
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
## How to run
This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
```py
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS
device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2
prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
"warp-ai/wuerstchen-prior", torch_dtype=dtype
).to(device)
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
"warp-ai/wuerstchen", torch_dtype=dtype
).to(device)
caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = ""
prior_output = prior_pipeline(
prompt=caption,
height=1024,
width=1536,
timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=4.0,
num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
image_embeddings=prior_output.image_embeddings,
prompt=caption,
negative_prompt=negative_prompt,
guidance_scale=0.0,
output_type="pil",
).images
```
## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**
```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
      author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Environmental Impact
**Würstchen v2** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq.
|
warp-ai/wuerstchen-prior | warp-ai | 2023-09-18T07:01:28Z | 390 | 21 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2306.00637",
"arxiv:1910.09700",
"license:mit",
"diffusers:WuerstchenPriorPipeline",
"region:us"
]
| null | 2023-07-19T19:09:44Z | ---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
## Würstchen - Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make
use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial
compression. This was unseen before, because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a
two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used by current top-performing models, which
also makes inference cheaper and faster.
## Würstchen - Prior
The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to decode the latents into pixel space.
### Image Sizes
Würstchen was trained on image resolutions between 1024x1024 & 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely fast to new resolutions. So finetuning it at 2048x2048 should be computationally cheap.
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/5pA5KUfGmvsObqiIjdGY1.jpeg" width=1000>
## How to run
This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
```py
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS
device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2
prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
"warp-ai/wuerstchen-prior", torch_dtype=dtype
).to(device)
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
"warp-ai/wuerstchen", torch_dtype=dtype
).to(device)
caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = ""
prior_output = prior_pipeline(
prompt=caption,
height=1024,
width=1536,
timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=4.0,
num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
image_embeddings=prior_output.image_embeddings,
prompt=caption,
negative_prompt=negative_prompt,
guidance_scale=0.0,
output_type="pil",
).images
```
### Image Sampling Times
The figure shows the inference times (on an A100) for different batch sizes (`num_images_per_prompt`) on Würstchen compared to [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) (without refiner).
The left figure shows inference times (using torch > 2.0), whereas the right figure applies `torch.compile` to both pipelines in advance.

## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**
```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
      author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Environmental Impact
**Würstchen v2** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq.
|
Abhay1212/news_demo | Abhay1212 | 2023-09-18T06:57:11Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T06:52:21Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
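For reference, a minimal sketch of reproducing this configuration when loading the base model (the model name is a placeholder):
```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "your-base-model",  # placeholder; substitute the actual base checkpoint
    quantization_config=bnb_config,
)
```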
### Framework versions
- PEFT 0.5.0
|
vishal0719/llama-fine-tuned-qa | vishal0719 | 2023-09-18T06:43:06Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"question-answering",
"dataset:junaid20/qa_assignment",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"region:us"
]
| question-answering | 2023-09-18T06:22:26Z | ---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llama-fine-tuned-qa
results: []
datasets:
- junaid20/qa_assignment
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-fine-tuned-qa
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the [junaid20/qa_assignment](https://huggingface.co/datasets/junaid20/qa_assignment) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
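These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; the output directory is an assumption, and `warmup_ratio` has no effect with a constant scheduler):
```py
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-fine-tuned-qa",  # assumed
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```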
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
urbija/ner-bio-annotated-6 | urbija | 2023-09-18T06:30:42Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-18T05:05:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: ner-bio-annotated-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bio-annotated-6
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
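A minimal inference sketch (the example sentence is made up):
```py
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="urbija/ner-bio-annotated-6",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("The patient was given 50 mg of metformin twice daily."))
```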
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Archolic/SDArchitecture | Archolic | 2023-09-18T06:23:40Z | 0 | 0 | null | [
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-18T06:19:56Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned for 595k steps at resolution 512x512 on "laion-aesthetics v2 5+", with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, EMA-only weights. Uses less VRAM; suitable for inference.
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, EMA + non-EMA weights. Uses more VRAM; suitable for fine-tuning.
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
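In practice this surfaces as a per-image flag on the pipeline output; a minimal sketch:
```py
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
out = pipe("a photo of an astronaut riding a horse on mars")
print(out.nsfw_content_detected)  # one boolean per image; flagged images are returned blacked out
```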
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the sketch after this list)
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
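As a concrete illustration of the f=8 downsampling (a sketch; the random tensor stands in for a real RGB image batch scaled to [-1, 1]):
```py
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
image = torch.randn(1, 3, 512, 512)  # stand-in for a real image batch
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
print(latents.shape)  # torch.Size([1, 4, 64, 64]): 512/8 = 64 per spatial dimension
```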
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask the entire image.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
ksks94/testmm | ksks94 | 2023-09-18T06:16:37Z | 3 | 0 | transformers | [
"transformers",
"object-detection",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-09-18T06:10:30Z | ---
pipeline_tag: object-detection
--- |
Mahendrakharra/llama2_original_fine_tuned | Mahendrakharra | 2023-09-18T06:09:56Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T05:15:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
kming/wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-new | kming | 2023-09-18T06:07:31Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-xvector",
"generated_from_trainer",
"dataset:edinburghcstr/ami",
"base_model:anton-l/wav2vec2-base-superb-sv",
"base_model:finetune:anton-l/wav2vec2-base-superb-sv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-15T09:23:26Z | ---
license: apache-2.0
base_model: anton-l/wav2vec2-base-superb-sv
tags:
- generated_from_trainer
datasets:
- edinburghcstr/ami
model-index:
- name: wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-normalized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-normalized
This model is a fine-tuned version of [anton-l/wav2vec2-base-superb-sv](https://huggingface.co/anton-l/wav2vec2-base-superb-sv) on the ami dataset.
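A minimal sketch for extracting speaker embeddings (x-vectors) with this checkpoint (the silent waveform is a stand-in for real 16 kHz speech):
```py
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForXVector

model_id = "kming/wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-new"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForXVector.from_pretrained(model_id)

waveform = torch.zeros(16000).numpy()  # 1 second of 16 kHz "audio"
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).embeddings  # one x-vector per input utterance
print(embeddings.shape)
```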
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TamerAbdelaziz/distilbert-base-uncased-finetuned-sst2 | TamerAbdelaziz | 2023-09-18T05:56:36Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T05:36:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: TamerAbdelaziz/distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TamerAbdelaziz/distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0592
- Validation Loss: 0.2958
- Train Accuracy: 0.9060
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 12627, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
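The optimizer dictionary above can be reconstructed directly in Keras; a minimal sketch:
```py
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=12627,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    jit_compile=True,
)
```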
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2123 | 0.2546 | 0.9014 | 0 |
| 0.1023 | 0.2641 | 0.8899 | 1 |
| 0.0592 | 0.2958 | 0.9060 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
RinkaEmina/RVC_Fusion | RinkaEmina | 2023-09-18T05:56:24Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-09-18T05:13:28Z | ---
license: other
---
---
https://docs.google.com/spreadsheets/d/1-vFAMWFJC7COlaMhpszPllvE9fu34w68/edit?usp=sharing&ouid=110582283716832233598&rtpof=true&sd=true
---
Fusion RVC models from many characters are here.
FREE TO USE.
---
No copyright claims, since each model is a fusion/combination of several voices.
---
If you want to become a VTuber with an AI voice, these models are free.
---
Please notify me if you use one, and I'll rename the model after your character!!!
---
Please read the Excel sheet for the details.
--- |
CyberHarem/mizumoto_yukari_idolmastercinderellagirls | CyberHarem | 2023-09-18T05:50:54Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/mizumoto_yukari_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T05:29:09Z | ---
license: mit
datasets:
- CyberHarem/mizumoto_yukari_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mizumoto_yukari_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/mizumoto_yukari_idolmastercinderellagirls.pt` as the embedding and `7280/mizumoto_yukari_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with the score of 0.975. The trigger words are:
1. `mizumoto_yukari_idolmastercinderellagirls`
2. `brown_hair, long_hair, brown_eyes, blush, smile, bangs, open_mouth, breasts`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.971 | [Download](7800/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_5.png) | [<NSFW, click to see>](7800/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_12.png) | [<NSFW, click to see>](7800/previews/pattern_13.png) |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.975** | [**Download**](7280/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_5.png) | [<NSFW, click to see>](7280/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_12.png) | [<NSFW, click to see>](7280/previews/pattern_13.png) |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.965 | [Download](6760/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_5.png) | [<NSFW, click to see>](6760/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_12.png) | [<NSFW, click to see>](6760/previews/pattern_13.png) |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.964 | [Download](6240/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_5.png) | [<NSFW, click to see>](6240/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_12.png) | [<NSFW, click to see>](6240/previews/pattern_13.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.975 | [Download](5720/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_5.png) | [<NSFW, click to see>](5720/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_12.png) | [<NSFW, click to see>](5720/previews/pattern_13.png) |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.972 | [Download](5200/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_5.png) | [<NSFW, click to see>](5200/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_12.png) | [<NSFW, click to see>](5200/previews/pattern_13.png) |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.968 | [Download](4680/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_5.png) | [<NSFW, click to see>](4680/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_12.png) | [<NSFW, click to see>](4680/previews/pattern_13.png) |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.966 | [Download](4160/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_5.png) | [<NSFW, click to see>](4160/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_12.png) | [<NSFW, click to see>](4160/previews/pattern_13.png) |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.969 | [Download](3640/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_5.png) | [<NSFW, click to see>](3640/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_12.png) | [<NSFW, click to see>](3640/previews/pattern_13.png) |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.967 | [Download](3120/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_5.png) | [<NSFW, click to see>](3120/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_12.png) | [<NSFW, click to see>](3120/previews/pattern_13.png) |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.967 | [Download](2600/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_5.png) | [<NSFW, click to see>](2600/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_12.png) | [<NSFW, click to see>](2600/previews/pattern_13.png) |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.960 | [Download](2080/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_5.png) | [<NSFW, click to see>](2080/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_12.png) | [<NSFW, click to see>](2080/previews/pattern_13.png) |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.961 | [Download](1560/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_5.png) | [<NSFW, click to see>](1560/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_12.png) | [<NSFW, click to see>](1560/previews/pattern_13.png) |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.960 | [Download](1040/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_5.png) | [<NSFW, click to see>](1040/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_12.png) | [<NSFW, click to see>](1040/previews/pattern_13.png) |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.958 | [Download](520/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_5.png) | [<NSFW, click to see>](520/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_12.png) | [<NSFW, click to see>](520/previews/pattern_13.png) |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
SamJoshua/phi-1_5-finetuned-gsm8k | SamJoshua | 2023-09-18T05:50:16Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-18T04:38:41Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unspecified dataset (GSM8K, judging by the model name).
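No usage example is provided; a minimal generation sketch (the GSM8K-style prompt and its formatting are assumptions; `trust_remote_code=True` is required for this custom phi architecture):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SamJoshua/phi-1_5-finetuned-gsm8k"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```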
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ailoveydovey/arfkwmx | ailoveydovey | 2023-09-18T05:43:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T05:13:51Z | ---
license: creativeml-openrail-m
---
|
Panchovix/Synthia-70B-v1.2b-safetensors | Panchovix | 2023-09-18T05:13:13Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-18T03:09:27Z | ---
license: llama2
---
Safetensors conversion of Synthia-70B-v1.2b (https://huggingface.co/migtissera/Synthia-70B-v1.2b). It can be used directly with transformers, or to convert/quantize models with exllamav2.
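A minimal load sketch (fp16 and `device_map="auto"` are assumptions for fitting a 70B model; requires `accelerate`):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Panchovix/Synthia-70B-v1.2b-safetensors"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
```
 |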
ailabturkiye/sempatuco | ailabturkiye | 2023-09-18T05:04:01Z | 0 | 0 | null | [
"tr",
"license:openrail",
"region:us"
]
| null | 2023-08-09T13:54:30Z | ---
license: openrail
language:
- tr
--- |
slplab/asd_pron_w2v_reg_balanced_500_79_corr | slplab | 2023-09-18T04:55:20Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-14T06:25:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: asd_pron_w2v_reg_balanced_500_79_corr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asd_pron_w2v_reg_balanced_500_79_corr
This model is a fine-tuned version of [slplab/wav2vec2-xls-r-300m_phone-mfa_korean](https://huggingface.co/slplab/wav2vec2-xls-r-300m_phone-mfa_korean) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5603
- Spearman Correlation: 0.7313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5646 | 1.0 | 308 | 0.5827 | 0.3993 |
| 0.3926 | 2.0 | 616 | 0.5692 | 0.4890 |
| 0.3873 | 3.0 | 924 | 0.5861 | 0.4202 |
| 0.3914 | 4.0 | 1232 | 0.5698 | 0.4443 |
| 0.3897 | 5.0 | 1540 | 0.5743 | 0.4825 |
| 0.3895 | 6.0 | 1848 | 0.5707 | 0.4169 |
| 0.3899 | 7.0 | 2156 | 0.5747 | 0.5449 |
| 0.3909 | 8.0 | 2464 | 0.5701 | 0.5510 |
| 0.3875 | 9.0 | 2772 | 0.5634 | 0.5389 |
| 0.3874 | 10.0 | 3080 | 0.5662 | 0.6041 |
| 0.3938 | 11.0 | 3388 | 0.5651 | 0.6427 |
| 0.3895 | 12.0 | 3696 | 0.5642 | 0.5501 |
| 0.3907 | 13.0 | 4004 | 0.5773 | 0.6043 |
| 0.389 | 14.0 | 4312 | 0.5697 | 0.6621 |
| 0.3887 | 15.0 | 4620 | 0.5563 | 0.6863 |
| 0.3882 | 16.0 | 4928 | 0.5647 | 0.6770 |
| 0.3907 | 17.0 | 5236 | 0.5719 | 0.6693 |
| 0.3903 | 18.0 | 5544 | 0.5610 | 0.7061 |
| 0.3905 | 19.0 | 5852 | 0.5616 | 0.6852 |
| 0.3877 | 20.0 | 6160 | 0.5722 | 0.6875 |
| 0.3874 | 21.0 | 6468 | 0.5647 | 0.6902 |
| 0.3901 | 22.0 | 6776 | 0.5619 | 0.7125 |
| 0.3913 | 23.0 | 7084 | 0.5717 | 0.6813 |
| 0.3857 | 24.0 | 7392 | 0.5533 | 0.7139 |
| 0.387 | 25.0 | 7700 | 0.5676 | 0.7143 |
| 0.3878 | 26.0 | 8008 | 0.5631 | 0.7118 |
| 0.3877 | 27.0 | 8316 | 0.5582 | 0.7276 |
| 0.389 | 28.0 | 8624 | 0.5660 | 0.7354 |
| 0.3909 | 29.0 | 8932 | 0.5623 | 0.7357 |
| 0.3876 | 30.0 | 9240 | 0.5603 | 0.7313 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.14.4
- Tokenizers 0.10.3
|
yaboidimsum/image_classification | yaboidimsum | 2023-09-18T04:52:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-11T13:21:26Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9604519774011302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9011
- Accuracy: 0.9605
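For quick use, a minimal inference sketch (the image path is a placeholder):
```py
from transformers import pipeline

classifier = pipeline("image-classification", model="yaboidimsum/image_classification")
print(classifier("example.jpg"))  # placeholder path; a URL also works
```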
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.8906 | 0.9605 |
| No log | 2.0 | 80 | 1.6868 | 0.9605 |
| No log | 3.0 | 120 | 1.6471 | 0.9605 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ldhldh/7b_8bit_qlora_explain | ldhldh | 2023-09-18T04:45:30Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T04:45:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
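A minimal sketch of the matching 8-bit load plus attaching this adapter (the base model name is a placeholder):
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
base = AutoModelForCausalLM.from_pretrained(
    "your-7b-base-model",  # placeholder; substitute the actual base checkpoint
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(base, "ldhldh/7b_8bit_qlora_explain")
```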
### Framework versions
- PEFT 0.6.0.dev0
|
shivr/TinyLlama_grit_and_local-narratives_lora | shivr | 2023-09-18T04:23:57Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T04:23:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt | ys7yoo | 2023-09-18T04:19:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"base_model:finetune:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T03:57:43Z | ---
base_model: ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt
This model is a fine-tuned version of [ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3](https://huggingface.co/ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3202
- Mse: 0.3202
- Mae: 0.4109
- R2: 0.8534
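A minimal scoring sketch, assuming a single-output regression head (consistent with the MSE/MAE/R2 metrics above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a KLUE-STS style sentence pair; the output scale follows the training labels
inputs = tokenizer("첫 번째 문장입니다.", "두 번째 문장입니다.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.squeeze().item())
```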
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.0857 | 1.0 | 183 | 0.4208 | 0.4208 | 0.4787 | 0.8073 |
| 0.1397 | 2.0 | 366 | 0.3135 | 0.3135 | 0.4191 | 0.8565 |
| 0.0989 | 3.0 | 549 | 0.3468 | 0.3468 | 0.4261 | 0.8412 |
| 0.0757 | 4.0 | 732 | 0.3006 | 0.3006 | 0.3959 | 0.8623 |
| 0.0601 | 5.0 | 915 | 0.4034 | 0.4034 | 0.4669 | 0.8153 |
| 0.0502 | 6.0 | 1098 | 0.3357 | 0.3357 | 0.4221 | 0.8463 |
| 0.0429 | 7.0 | 1281 | 0.3202 | 0.3202 | 0.4109 | 0.8534 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
smurli/ppo-LunarLander-v2 | smurli | 2023-09-18T04:12:02Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T04:10:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.06 +/- 18.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="smurli/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
m-aliabbas1/erc_question_big_model | m-aliabbas1 | 2023-09-18T04:01:58Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-18T04:01:12Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# m-aliabbas1/erc_question_big_model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("m-aliabbas1/erc_question_big_model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
dsmsb/16_combo_webscrap_1709_v2_reduce_others | dsmsb | 2023-09-18T04:00:21Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T01:47:02Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 16_combo_webscrap_1709_v2_reduce_others
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 16_combo_webscrap_1709_v2_reduce_others
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1501
- Accuracy: 0.9636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 363 | 1.0481 | 0.7263 |
| 1.5287 | 2.0 | 726 | 0.5613 | 0.8655 |
| 0.6856 | 3.0 | 1089 | 0.3666 | 0.9121 |
| 0.6856 | 4.0 | 1452 | 0.2880 | 0.9284 |
| 0.4313 | 5.0 | 1815 | 0.2187 | 0.9464 |
| 0.3097 | 6.0 | 2178 | 0.1992 | 0.9505 |
| 0.2454 | 7.0 | 2541 | 0.1627 | 0.9598 |
| 0.2454 | 8.0 | 2904 | 0.1501 | 0.9636 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
axelit64/image_classification | axelit64 | 2023-09-18T03:56:43Z | 229 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-18T03:07:32Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3340
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5156 | 0.45 |
| No log | 2.0 | 80 | 1.4200 | 0.4562 |
| No log | 3.0 | 120 | 1.3790 | 0.5 |
| No log | 4.0 | 160 | 1.2859 | 0.525 |
| No log | 5.0 | 200 | 1.2592 | 0.5125 |
| No log | 6.0 | 240 | 1.3145 | 0.55 |
| No log | 7.0 | 280 | 1.3267 | 0.4813 |
| No log | 8.0 | 320 | 1.3288 | 0.5 |
| No log | 9.0 | 360 | 1.3073 | 0.5 |
| No log | 10.0 | 400 | 1.3066 | 0.5188 |
| No log | 11.0 | 440 | 1.2691 | 0.5563 |
| No log | 12.0 | 480 | 1.2809 | 0.5437 |
| 0.876 | 13.0 | 520 | 1.2963 | 0.5625 |
| 0.876 | 14.0 | 560 | 1.2965 | 0.5312 |
| 0.876 | 15.0 | 600 | 1.3542 | 0.5188 |
| 0.876 | 16.0 | 640 | 1.3489 | 0.5125 |
| 0.876 | 17.0 | 680 | 1.3146 | 0.5687 |
| 0.876 | 18.0 | 720 | 1.2442 | 0.575 |
| 0.876 | 19.0 | 760 | 1.3497 | 0.575 |
| 0.876 | 20.0 | 800 | 1.3316 | 0.5437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
shaowenchen/chinese-alpaca-2-13b-gguf | shaowenchen | 2023-09-18T03:44:45Z | 100 | 0 | null | [
"gguf",
"meta",
"llama",
"llama-2",
"alpaca",
"alpaca-2",
"chinese",
"text-generation",
"zh",
"license:other",
"region:us"
]
| text-generation | 2023-09-16T23:34:00Z | ---
inference: false
language:
- zh
license: other
model_creator: ziqingyang
model_link: https://huggingface.co/ziqingyang/chinese-alpaca-2-13b
model_name: chinese-alpaca-2-13b
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- alpaca
- alpaca-2
- chinese
---
## Provided files
| Name | Quant method | Size |
| -------------------------------- | ------------ | ------- |
| chinese-alpaca-2-13b.Q2_K.gguf | Q2_K | 5.2 GB |
| chinese-alpaca-2-13b.Q3_K.gguf | Q3_K | 6.0 GB |
| chinese-alpaca-2-13b.Q3_K_L.gguf | Q3_K_L | 6.6 GB |
| chinese-alpaca-2-13b.Q3_K_S.gguf | Q3_K_S | 5.4 GB |
| chinese-alpaca-2-13b.Q4_0.gguf | Q4_0 | 7.0 GB |
| chinese-alpaca-2-13b.Q4_1.gguf | Q4_1 | 7.8 GB |
| chinese-alpaca-2-13b.Q4_K.gguf | Q4_K | 7.5 GB |
| chinese-alpaca-2-13b.Q4_K_S.gguf | Q4_K_S | 7.1 GB |
| chinese-alpaca-2-13b.Q5_0.gguf | Q5_0 | 8.5 GB |
| chinese-alpaca-2-13b.Q5_1.gguf | Q5_1 | 9.3 GB |
| chinese-alpaca-2-13b.Q5_K.gguf | Q5_K | 8.8 GB |
| chinese-alpaca-2-13b.Q5_K_S.gguf | Q5_K_S | 8.5 GB |
| chinese-alpaca-2-13b.Q6_K.gguf | Q6_K | 10.0 GB |
| chinese-alpaca-2-13b.Q8_0.gguf | Q8_0 | 13.0 GB |
| chinese-alpaca-2-13b.gguf | full | 25.0 GB |
Usage:
```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
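Once the server is running, a minimal query sketch against the OpenAI-compatible completion endpoint served by llama-cpp-python (stdlib only; the prompt is a placeholder):
```python
import json, urllib.request

payload = {"prompt": "你好,请介绍一下你自己。", "max_tokens": 64}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# The response follows the OpenAI completion schema
print(json.load(urllib.request.urlopen(req))["choices"][0]["text"])
```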
You can also open http://localhost:8000/docs to browse the Swagger UI. |
Chickenfish/Dayte_dreambooth | Chickenfish | 2023-09-18T03:41:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-22T07:18:04Z | ---
license: creativeml-openrail-m
---
|
ZiaPratama/image_classification | ZiaPratama | 2023-09-18T03:39:42Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-16T08:52:55Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3659
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 32 | 1.9290 | 0.3063 |
| No log | 2.0 | 64 | 1.6622 | 0.3563 |
| No log | 3.0 | 96 | 1.5753 | 0.3937 |
| No log | 4.0 | 128 | 1.5099 | 0.475 |
| No log | 5.0 | 160 | 1.4614 | 0.4313 |
| No log | 6.0 | 192 | 1.4104 | 0.5 |
| No log | 7.0 | 224 | 1.3962 | 0.4562 |
| No log | 8.0 | 256 | 1.3535 | 0.5437 |
| No log | 9.0 | 288 | 1.3483 | 0.5062 |
| No log | 10.0 | 320 | 1.3994 | 0.45 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
rafelsiregar/image_classification | rafelsiregar | 2023-09-18T03:35:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-16T17:19:24Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3341
- Accuracy: 0.5375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 1.3975 | 0.4062 |
| No log | 2.0 | 160 | 1.3917 | 0.4875 |
| No log | 3.0 | 240 | 1.2964 | 0.5 |
| No log | 4.0 | 320 | 1.2587 | 0.5312 |
| No log | 5.0 | 400 | 1.2705 | 0.5125 |
| No log | 6.0 | 480 | 1.2557 | 0.55 |
| 0.7469 | 7.0 | 560 | 1.3400 | 0.525 |
| 0.7469 | 8.0 | 640 | 1.3586 | 0.5687 |
| 0.7469 | 9.0 | 720 | 1.3317 | 0.5563 |
| 0.7469 | 10.0 | 800 | 1.2965 | 0.5687 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
heegyu/WizardVicuna-open-llama-3b-v2 | heegyu | 2023-09-18T03:30:22Z | 9,862 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"dataset:heegyu/wizard_vicuna_70k_v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-25T04:35:12Z | ---
datasets:
- heegyu/wizard_vicuna_70k_v2
license: apache-2.0
---
## Hyperparameters
- 3/8 epochs (3rd-epoch checkpoint from an 8-epoch training run)
- learning rate 1e-4 -> 1e-5 with cosine decay
- batch size 128
- max sequence length 2048
- AdamW (weight decay=0.01, b1=0.9, b2=0.99, grad_clip=1.0)
- no warmup
- BF16
- Base Model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
model = AutoModelForCausalLM.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
inputs = tokenizer(["Human: Hi, nice to meet you!\n\nAssistant: "], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```
output: `['Human: Hi, nice to meet you!\n\nAssistant: Hello. Great to meet you too. Well, how can I assist you today?<|endoftext|>']` |
LarryAIDraw/robo_3 | LarryAIDraw | 2023-09-18T03:29:35Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:28:39Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/147022/shiori-saginomiya-robo-joshikousei-no-mudazukai |
LarryAIDraw/wota_10 | LarryAIDraw | 2023-09-18T03:29:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:28:15Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/147026/akane-kikuchi-wota-joshikousei-no-mudazukai |
nemesis1/chlldrgnrc | nemesis1 | 2023-09-18T03:28:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:28:31Z | ---
license: creativeml-openrail-m
---
|
ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt | ys7yoo | 2023-09-18T03:26:33Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T03:03:05Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
- Mse: 0.3621
- Mae: 0.4438
- R2: 0.8342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.8712 | 1.0 | 183 | 0.5118 | 0.5118 | 0.5409 | 0.7656 |
| 0.1606 | 2.0 | 366 | 0.4621 | 0.4621 | 0.5142 | 0.7884 |
| 0.1111 | 3.0 | 549 | 0.4687 | 0.4687 | 0.5088 | 0.7854 |
| 0.0837 | 4.0 | 732 | 0.4317 | 0.4317 | 0.4906 | 0.8023 |
| 0.0681 | 5.0 | 915 | 0.4662 | 0.4662 | 0.5091 | 0.7865 |
| 0.0559 | 6.0 | 1098 | 0.3742 | 0.3742 | 0.4524 | 0.8286 |
| 0.0485 | 7.0 | 1281 | 0.3621 | 0.3621 | 0.4438 | 0.8342 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
LarryAIDraw/schwarz_arknights | LarryAIDraw | 2023-09-18T03:25:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:19:18Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/130905/schwarz-arknights |
EldritchAdam/LaxpeintXL | EldritchAdam | 2023-09-18T03:10:49Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-09-04T19:16:06Z | ---
license: openrail
---
<div><p><strong><span style="color:rgb(250, 82, 82)">LaxpeintXL - tentatively final version for SDXL 1.0</span></strong></p>
<p>This model is a companion to <a target="_blank" rel="ugc" href="https://huggingface.co/EldritchAdam/ClassipeintXL">ClassipeintXL</a>. Although I see ClassipeintXL as really crucial to SDXL (and how I use it), LaxpeintXL is not so obviously necessary. You can get much of this style with the right combination of artist names and aesthetic terms. So why use a LoRA?</p>
<p>As much as SDXL is a huge leap forward from SD2, it shares a failing - albeit to a much lesser extent - that keeping an aesthetic consistent is very difficult. The same terms and artist names will not have the same effect for a portrait as for a landscape or a sci-fi scene etc.</p>
<p>This LoRA helps you to more consistently get that slick digital paint style in every image. Prompt for whatever you want, it's going to be beautiful.</p>
<p><strong><em><span style="color:rgb(190, 75, 219)">Recommended settings for use:</span></em></strong></p><p><a target="_blank" rel="ugc" href="https://pastebin.com/tXKwTkxC"><strong><em><span style="color:rgb(76, 110, 245)">You can go here (pastebin) to download a ComfyUI workflow</span></em></strong></a><span style="color:rgb(34, 139, 230)"> like what I used, but without custom nodes that are embedded in my image uploads on CivitAI.</span></p>
<ul>
<li>
<p>Start with a full 1.0 LoRA strength and adjust down to 0.7 or 0.8 for a subtler painterly effect. You can adjust upward (to 1.2 or maybe a little more) to maximize the painterly appearance, but it can start to introduce some quirks</p>
</li>
<li>
<p>Use the LoRA with your preferred SDXL model with no refiner. I have so far just stuck with base SDXL1.0 but other finetunes work great as well.</p>
</li>
<li>
<p>I recommend the DPM samplers, but use your favorite. Some may produce softer painting styles that don't suit my taste as much but whatever you prefer is great.</p>
</li>
<li>
<p>Don't do anything special for your prompt - just describe what you want to see. You don't really need to use any keywords unless some subject matter seems to override the LoRA's style, then you can bring it back in line by using the terms "digital painting of..." and "by LaxpeintXL".</p>
</li>
</ul>
</div>
<div style="max-width:500px">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/0B4gg9e6HNzYI-2dJzIZH.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/gH9bA1TDD2S_bJzheUXr_.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/cu0EyW4eOqr9iVhTN2Cgc.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/o0El5-8ms0J-Ae1gqNi71.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/CbnMKPkqAXM4st88RqXmj.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/mCmmJXYUmD8QamftYjWuQ.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/z4DXgHzHjKbh1mkfW7ur_.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/YdvSPWp38oa-JZgEqnEfp.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/zR1huUXvEl7b6kFdbuxRg.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/jiFLLFahcoE72BcjFKuws.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/8JB6sAgRnaHJ5jsgTpHki.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/LJQJw0V1E3NCdVEMUwgW7.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/LyZL9NLV2mSxQtQae4trO.png">
</div> |
nemesis1/colab | nemesis1 | 2023-09-18T03:09:18Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:09:18Z | ---
license: creativeml-openrail-m
---
|
TigerResearch/tigerbot-13b-chat-4bit | TigerResearch | 2023-09-18T03:01:39Z | 5 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-30T08:01:30Z | ---
license: apache-2.0
---
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
This is a 4-bit GPTQ version of the [Tigerbot 13b chat](https://huggingface.co/TigerResearch/tigerbot-13b-chat).
It was quantized to 4-bit using AutoGPTQ: https://github.com/PanQiWei/AutoGPTQ
## How to download and use this model on GitHub: https://github.com/TigerResearch/TigerBot
Here are commands to clone TigerBot and install its dependencies.
```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
Inference with the command-line interface
Infer with exllama:
```
# Install exllama_lib
pip install exllama_lib@git+https://github.com/taprosoft/exllama.git
# Run inference
CUDA_VISIBLE_DEVICES=0 python other_infer/exllama_infer.py --model_path TigerResearch/tigerbot-13b-chat-4bit
```
Infer with auto-gptq:
```
# Install auto-gptq
pip install auto-gptq
# Run inference
CUDA_VISIBLE_DEVICES=0 python other_infer/gptq_infer.py --model_path TigerResearch/tigerbot-13b-chat-4bit
``` |
AyanKumarBhunia/textual_inversion_cat | AyanKumarBhunia | 2023-09-18T02:49:48Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-18T02:21:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - AyanKumarBhunia/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
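A minimal usage sketch. The placeholder token is an assumption; `<cat-toy>` is the conventional token in the diffusers textual-inversion example, so check the repo's learned embedding for the actual one:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Load the learned concept embedding from this repo
pipe.load_textual_inversion("AyanKumarBhunia/textual_inversion_cat")
image = pipe("a <cat-toy> sitting on a park bench").images[0]  # "<cat-toy>" is assumed
image.save("cat_toy.png")
```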
|
DHUGH/ppo-LunarLander-v0 | DHUGH | 2023-09-18T02:44:52Z | 2 | 1 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T02:35:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.71 +/- 24.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="DHUGH/ppo-LunarLander-v0", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Kalaiarasi24/my_awesome_qa_model | Kalaiarasi24 | 2023-09-18T02:33:36Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-17T23:36:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1118
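A minimal inference sketch with the question-answering pipeline (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Kalaiarasi24/my_awesome_qa_model")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"])
```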
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.5298 |
| 2.8273 | 2.0 | 500 | 2.1338 |
| 2.8273 | 3.0 | 750 | 2.1118 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hegelty/KcBERT-Base-finetuned-hate | hegelty | 2023-09-18T02:30:55Z | 113 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"ko",
"license:bsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T04:23:36Z | ---
license: bsd
language:
- ko
library_name: transformers
---
# Hate speech classification (혐오표현 분류)
tag 0: hate (혐오)
tag 1: normal (일반)
# Source code
https://github.com/hegelty/hate-classifier
# Dataset
https://github.com/smilegate-ai/korean_unsmile_dataset
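# Usage
A minimal inference sketch with the text-classification pipeline (label ids follow the tag mapping above):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="hegelty/KcBERT-Base-finetuned-hate")
print(clf("분류할 문장을 입력하세요."))  # sample input; 0 = hate, 1 = normal per the tags above
```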
|
smjain/kishor | smjain | 2023-09-18T02:13:50Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-18T02:13:44Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks kishor kumar
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
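A minimal generation sketch, assuming the repo ships DreamBooth LoRA weights (the usual AutoTrain SDXL output; adjust if this repo stores full pipeline weights instead):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("smjain/kishor")  # assumption: LoRA-format weights
# The instance prompt comes from this card's metadata
image = pipe("photo of a sks kishor kumar", num_inference_steps=25).images[0]
image.save("kishor.png")
```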
|
ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep3_ckpt | ys7yoo | 2023-09-18T02:00:08Z | 121 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T01:38:19Z | ---
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep3_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep3_ckpt
This model was trained on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3260
- Mse: 0.3260
- Mae: 0.4173
- R2: 0.8507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.9158 | 1.0 | 183 | 0.4483 | 0.4483 | 0.4964 | 0.7947 |
| 0.1335 | 2.0 | 366 | 0.3875 | 0.3875 | 0.4620 | 0.8226 |
| 0.0964 | 3.0 | 549 | 0.3260 | 0.3260 | 0.4173 | 0.8507 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
wu981526092/Sentence-Level-Stereotype-Detector | wu981526092 | 2023-09-18T01:49:58Z | 15,593 | 4 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:stereoset",
"dataset:crows_pairs",
"dataset:wu981526092/MGSD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-29T16:02:37Z | ---
license: mit
datasets:
- stereoset
- crows_pairs
- wu981526092/MGSD
language:
- en
metrics:
- f1
- recall
- precision
- accuracy
---
# Sentence-Level Stereotype Classifier
The Sentence-Level Stereotype Classifier is a transformer-based model developed to detect and classify different types of stereotypes present in text at the sentence level. It is designed to recognize stereotypes and anti-stereotypes concerning gender, race, profession, and religion. The model can help in developing applications aimed at mitigating stereotypical language use and promoting fairness and inclusivity in natural language processing tasks.
## Model Architecture
The model is built on the pre-trained DistilBERT model and fine-tuned on the MGSD dataset for sentence-level stereotype classification.
## Classes
The model identifies nine classes, including:
0. unrelated: The token does not indicate any stereotype.
1. stereotype_gender: The token indicates a gender stereotype.
2. anti-stereotype_gender: The token indicates an anti-gender stereotype.
3. stereotype_race: The token indicates a racial stereotype.
4. anti-stereotype_race: The token indicates an anti-racial stereotype.
5. stereotype_profession: The token indicates a professional stereotype.
6. anti-stereotype_profession: The token indicates an anti-professional stereotype.
7. stereotype_religion: The token indicates a religious stereotype.
8. anti-stereotype_religion: The token indicates an anti-religious stereotype.
## Usage
The model can be used as part of Hugging Face's `pipeline` for text classification.
```python
from transformers import pipeline
nlp = pipeline("text-classification", model="wu981526092/Sentence-Level-Stereotype-Detector", tokenizer="wu981526092/Sentence-Level-Stereotype-Detector")
result = nlp("Text containing potential stereotype...")
print(result)
``` |
wu981526092/Token-Level-Stereotype-Detector | wu981526092 | 2023-09-18T01:48:45Z | 110 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:stereoset",
"dataset:crows_pairs",
"dataset:wu981526092/MGSD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-06-24T10:21:27Z | ---
license: mit
datasets:
- stereoset
- crows_pairs
- wu981526092/MGSD
language:
- en
metrics:
- f1
- recall
- precision
- accuracy
---
# Token-Level Stereotype Classifier
The Token-Level Stereotype Classifier is a transformer-based model developed to detect and classify different types of stereotypes present in text at the token level. It is designed to recognize stereotypes and anti-stereotypes concerning gender, race, profession, and religion. The model can help in developing applications aimed at mitigating stereotypical language use and promoting fairness and inclusivity in natural language processing tasks.
## Model Architecture
The model is built on the pre-trained DistilBERT model and fine-tuned on the MGSD dataset for token-level classification.
## Classes
The model identifies nine classes, including:
1. unrelated: The token does not indicate any stereotype.
2. stereotype_gender: The token indicates a gender stereotype.
3. anti-stereotype_gender: The token indicates an anti-gender stereotype.
4. stereotype_race: The token indicates a racial stereotype.
5. anti-stereotype_race: The token indicates an anti-racial stereotype.
6. stereotype_profession: The token indicates a professional stereotype.
7. anti-stereotype_profession: The token indicates an anti-professional stereotype.
8. stereotype_religion: The token indicates a religious stereotype.
9. anti-stereotype_religion: The token indicates an anti-religious stereotype.
## Usage
The model can be used as part of Hugging Face's `pipeline` for token classification (NER).
```python
from transformers import pipeline
nlp = pipeline("ner", model="wu981526092/Token-Level-Stereotype-Detector", tokenizer="wu981526092/Token-Level-Stereotype-Detector")
result = nlp("Text containing potential stereotype...")
print(result)
``` |
CreatorPhan/Bloomz_Lora | CreatorPhan | 2023-09-18T01:38:45Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"region:us"
]
| null | 2023-09-14T14:46:28Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Sumit6597/LLM-peftAdapter | Sumit6597 | 2023-09-18T01:29:01Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T01:28:58Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
KETI-AIR-Downstream/long-ke-t5-base-summarization_e10 | KETI-AIR-Downstream | 2023-09-18T01:28:33Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:jsonl_dataset_sum.py",
"base_model:KETI-AIR/long-ke-t5-base",
"base_model:finetune:KETI-AIR/long-ke-t5-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-05T04:24:59Z | ---
tags:
- generated_from_trainer
datasets:
- jsonl_dataset_sum.py
metrics:
- rouge
widget:
- text: 'summarization-num_lines-1: 현대자동차는 18일(현지 시간) 이탈리아 레이크 코모에서 개최된 ''현대 리유니온''
행사에서 ''포니 쿠페 콘셉트'' 복원 모델을 세계에 첫 공개했습니다. 이 프로젝트는 현대차의 창업자인 정주영 선대 회장의 수출보국(輸出報國)
정신과 포니 쿠페를 통한 글로벌 브랜드 정립에 대한 끊임없는 열정과 도전 정신을 재조명하기 위한 것입니다. 현대차에 따르면, 이번 현대 리유니온
행사는 회사의 역사를 다시 돌아보며 변하지 않는 미래 지향적인 비전과 방향성을 공유하는 브랜드 유산 행사입니다.'
example_title: sample 1
base_model: KETI-AIR/long-ke-t5-base
model-index:
- name: summarization_all
results:
- task:
type: summarization
name: Summarization
dataset:
name: jsonl_dataset_sum.py
type: jsonl_dataset_sum.py
config: 'null'
split: None
metrics:
- type: rouge
value: 21.9857
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_all
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the jsonl_dataset_sum.py dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1442
- Rouge1: 21.9857
- Rouge2: 10.2876
- Rougel: 21.4026
- Rougelsum: 21.4278
- Gen Len: 86.2560
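A minimal inference sketch (the `summarization-num_lines-1:` task prefix mirrors the widget example above; the input text is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "KETI-AIR-Downstream/long-ke-t5-base-summarization_e10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarization-num_lines-1: " + "요약할 긴 한국어 문서를 여기에 넣으세요."
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```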
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2503 | 1.0 | 184670 | 1.2439 | 20.2525 | 9.1467 | 19.7454 | 19.771 | 87.1766 |
| 1.1629 | 2.0 | 369340 | 1.1773 | 21.0068 | 9.6691 | 20.4565 | 20.4888 | 89.6074 |
| 1.1087 | 3.0 | 554010 | 1.1431 | 21.0216 | 9.6545 | 20.489 | 20.5108 | 85.5895 |
| 1.056 | 4.0 | 738680 | 1.1247 | 21.6776 | 10.1424 | 21.09 | 21.1168 | 89.6576 |
| 1.0199 | 5.0 | 923350 | 1.1179 | 21.6563 | 10.0965 | 21.0814 | 21.1056 | 89.2454 |
| 0.9652 | 6.0 | 1108020 | 1.1122 | 21.6209 | 10.0725 | 21.0623 | 21.0864 | 86.7079 |
| 0.92 | 7.0 | 1292690 | 1.1136 | 21.9396 | 10.2734 | 21.3465 | 21.3745 | 86.5547 |
| 0.8804 | 8.0 | 1477360 | 1.1228 | 21.8457 | 10.1858 | 21.2552 | 21.278 | 87.6413 |
| 0.8447 | 9.0 | 1662030 | 1.1327 | 21.92 | 10.2635 | 21.3415 | 21.3633 | 86.4453 |
| 0.7678 | 10.0 | 1846700 | 1.1442 | 21.9857 | 10.2876 | 21.4026 | 21.4278 | 86.2560 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
KETI-AIR-Downstream/long-ke-t5-base-summarization | KETI-AIR-Downstream | 2023-09-18T01:28:19Z | 124 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"ko",
"dataset:jsonl_dataset_sum.py",
"base_model:KETI-AIR/long-ke-t5-base",
"base_model:finetune:KETI-AIR/long-ke-t5-base",
"license:artistic-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-05-19T03:24:32Z | ---
language:
- ko
license: artistic-2.0
tags:
- generated_from_trainer
datasets:
- jsonl_dataset_sum.py
metrics:
- rouge
widget:
- text: 'summarization-num_lines-1: 현대자동차는 18일(현지 시간) 이탈리아 레이크 코모에서 개최된 ''현대 리유니온''
행사에서 ''포니 쿠페 콘셉트'' 복원 모델을 세계에 첫 공개했습니다. 이 프로젝트는 현대차의 창업자인 정주영 선대 회장의 수출보국(輸出報國)
정신과 포니 쿠페를 통한 글로벌 브랜드 정립에 대한 끊임없는 열정과 도전 정신을 재조명하기 위한 것입니다. 현대차에 따르면, 이번 현대 리유니온
행사는 회사의 역사를 다시 돌아보며 변하지 않는 미래 지향적인 비전과 방향성을 공유하는 브랜드 유산 행사입니다.'
example_title: sample 1
base_model: KETI-AIR/long-ke-t5-base
model-index:
- name: summarization_all
results:
- task:
type: summarization
name: Summarization
dataset:
name: jsonl_dataset_sum.py
type: jsonl_dataset_sum.py
config: 'null'
split: None
metrics:
- type: rouge
value: 21.7197
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_all
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the jsonl_dataset_sum.py dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0758
- Rouge1: 21.7197
- Rouge2: 10.1392
- Rougel: 21.1499
- Rougelsum: 21.173
- Gen Len: 87.4589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2171 | 1.0 | 184670 | 1.2070 | 20.611 | 9.2868 | 20.0833 | 20.1095 | 87.4065 |
| 1.0916 | 2.0 | 369340 | 1.1190 | 21.3264 | 9.8656 | 20.7683 | 20.8005 | 88.0284 |
| 0.9823 | 3.0 | 554010 | 1.0758 | 21.7197 | 10.1392 | 21.1499 | 21.173 | 87.4589 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2 |
KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko | KETI-AIR-Downstream | 2023-09-18T01:27:39Z | 159 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"ko",
"base_model:KETI-AIR/long-ke-t5-base",
"base_model:finetune:KETI-AIR/long-ke-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-04-28T14:19:27Z | ---
language:
- en
- ko
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
metrics:
- bleu
pipeline_tag: translation
widget:
- text: 'translate_en2ko: The Seoul Metropolitan Government said Wednesday that it
would develop an AI-based congestion monitoring system to provide better information
to passengers about crowd density at each subway station.'
example_title: Sample 1
- text: 'translate_en2ko: According to Seoul Metro, the operator of the subway service
in Seoul, the new service will help analyze the real-time flow of passengers and
crowd levels in subway compartments, improving operational efficiency.'
example_title: Sample 2
base_model: KETI-AIR/long-ke-t5-base
model-index:
- name: en2ko
results:
- task:
type: translation
name: Translation
dataset:
name: KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
koen,none,none,none,none
type: KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
args: koen,none,none,none,none
metrics:
- type: bleu
value: 42.463
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en2ko
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation koen,none,none,none,none dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6000
- Bleu: 42.463
- Gen Len: 30.6512
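A minimal inference sketch (the `translate_en2ko:` task prefix mirrors the widget examples above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer(
    "translate_en2ko: The new service will help analyze the real-time flow of passengers.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```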
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.6989 | 1.0 | 93762 | 0.6666 | 20.3697 | 18.1258 |
| 0.6143 | 2.0 | 187524 | 0.6181 | 21.2903 | 18.1428 |
| 0.5544 | 3.0 | 281286 | 0.6000 | 21.9763 | 18.1424 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2 |
KETI-AIR/ke-t5-base-newslike | KETI-AIR | 2023-09-18T01:26:52Z | 128 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language: [ko, en]
tags:
- t5
eos_token: "</s>"
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# ke-t5 base
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-base-newslike")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base-newslike")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` |
KETI-AIR/ke-t5-large-newslike | KETI-AIR | 2023-09-18T01:26:20Z | 13 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language: [ko, en]
tags:
- t5
eos_token: "</s>"
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# ke-t5 large
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-large-newslike")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large-newslike")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` |
KETI-AIR/ke-t5-large | KETI-AIR | 2023-09-18T01:24:55Z | 102 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language: [en, ko]
tags:
- t5
eos_token: "</s>"
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# ke-t5 large
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-large")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` |
KETI-AIR/ke-t5-small-ko | KETI-AIR | 2023-09-18T01:24:46Z | 236 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language: ko
tags:
- t5
eos_token: "</s>"
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# ke-t5 small (ko)
Pretrained T5 model on Korean (the `-ko` checkpoints are pre-trained on the Korean corpus only, per the language metadata). See the [Github repo](https://github.com/AIRC-KETI/ke-t5), the [paper](https://aclanthology.org/2021.findings-emnlp.33/), and the [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small-ko")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small-ko")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` |
KETI-AIR/ke-t5-base-ko | KETI-AIR | 2023-09-18T01:24:34Z | 378 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z |
---
language: ko
license: apache-2.0
tags:
- t5
eos_token: </s>
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# Model Card for ke-t5-base-ko
# Model Details
## Model Description
- **Developed by:** Korea Electronics Technology Institute Artificial Intelligence Research Center
- **Shared by [Optional]:** More information needed
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** Korean
- **License:** Apache 2.0
- **Related Models:**
- **Parent Model:** T5
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- [KE-T5 Github Repo](https://github.com/AIRC-KETI/ke-t5)
- [Paper](https://aclanthology.org/2021.findings-emnlp.33/)
- [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
# Uses
## Direct Use
This model can be used for the task of Text2Text Generation
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
See the [t5-base model card](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
```
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base-ko")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base-ko")
```
</details>
|
KETI-AIR/ke-t5-base | KETI-AIR | 2023-09-18T01:24:23Z | 1,448 | 22 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- en
- ko
license: apache-2.0
tags:
- t5
eos_token: </s>
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# Model Card for ke-t5-base
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Base is the checkpoint with 220 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
- **Shared by [Optional]:** Korea Electronics Technology Institute Artificial Intelligence Research Center
- **Model type:** Text Generation
- **Language(s) (NLP):** English, Korean
- **License:** Apache 2.0
- **Related Models:**
- **Parent Model:** T5
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- [KE-T5 Github Repo](https://github.com/AIRC-KETI/ke-t5)
- [Paper](https://aclanthology.org/2021.findings-emnlp.33/)
- [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
# Uses
## Direct Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
See the [t5-base model card](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
### Factors
More information needed
### Metrics
More information needed
## Results
For full results for T5-Base, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base")
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
|
ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep3_ckpt | ys7yoo | 2023-09-18T01:22:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T01:10:48Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_roberta-large_lr1e-05_wd1e-03_ep3_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_roberta-large_lr1e-05_wd1e-03_ep3_ckpt
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4129
- Mse: 0.4129
- Mae: 0.4750
- R2: 0.8109
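As a minimal inference sketch (an addition, not from the original card; it assumes the model emits a single regression logit on the KLUE STS 0–5 similarity scale, and the sentence pair is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep3_ckpt"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted similarity score
print(score)
```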
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.3435 | 1.0 | 183 | 0.3891 | 0.3891 | 0.4693 | 0.8218 |
| 0.1449 | 2.0 | 366 | 0.5301 | 0.5301 | 0.5456 | 0.7572 |
| 0.1059 | 3.0 | 549 | 0.4129 | 0.4129 | 0.4750 | 0.8109 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Navu45/neon_sd_model | Navu45 | 2023-09-18T01:14:45Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-18T00:02:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Navu45/neon_sd_model
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Navu45/neon_dreambooth dataset. You can find some example images in the following.




|
ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt | ys7yoo | 2023-09-18T01:08:41Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T00:46:19Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
- f1
model-index:
- name: nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: nli
split: validation
args: nli
metrics:
- name: Accuracy
type: accuracy
value: 0.9026666666666666
- name: F1
type: f1
value: 0.9025716877431428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3425
- Accuracy: 0.9027
- F1: 0.9026
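As a minimal inference sketch (an addition to the card; the premise/hypothesis pair is illustrative, and label names follow the exported model config):
```python
from transformers import pipeline

nli = pipeline("text-classification", model="ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt")
result = nli({"text": "남자가 기타를 치고 있다.", "text_pair": "한 사람이 악기를 연주하고 있다."})
print(result)  # e.g. entailment / neutral / contradiction
```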
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5725 | 1.0 | 391 | 0.3381 | 0.8813 | 0.8811 |
| 0.2182 | 2.0 | 782 | 0.3055 | 0.898 | 0.8979 |
| 0.112 | 3.0 | 1173 | 0.3425 | 0.9027 | 0.9026 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
penguinman73/distilbert-base-uncased-finetuned-clinc | penguinman73 | 2023-09-18T00:33:44Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-17T05:21:22Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9158064516129032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9158
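As a minimal inference sketch (an addition, not from the original card; the utterance is illustrative):
```python
from transformers import pipeline

intent = pipeline("text-classification", model="penguinman73/distilbert-base-uncased-finetuned-clinc")
print(intent("please set a timer for ten minutes"))  # returns the predicted CLINC intent label
```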
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2762 | 0.7284 |
| 3.7824 | 2.0 | 636 | 1.8624 | 0.8358 |
| 3.7824 | 3.0 | 954 | 1.1512 | 0.8984 |
| 1.6858 | 4.0 | 1272 | 0.8540 | 0.9132 |
| 0.8983 | 5.0 | 1590 | 0.7724 | 0.9158 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/kurosaki_chitose_idolmastercinderellagirls | CyberHarem | 2023-09-18T00:33:36Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/kurosaki_chitose_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T00:12:47Z | ---
license: mit
datasets:
- CyberHarem/kurosaki_chitose_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kurosaki_chitose_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5720, you need to download `5720/kurosaki_chitose_idolmastercinderellagirls.pt` as the embedding and `5720/kurosaki_chitose_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5720**, with the score of 0.980. The trigger words are:
1. `kurosaki_chitose_idolmastercinderellagirls`
2. `blonde_hair, long_hair, bangs, red_eyes, smile, hair_between_eyes, breasts, blush, hairband`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.964 | [Download](7800/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_12.png) |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.969 | [Download](7280/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_12.png) |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.954 | [Download](6760/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_12.png) |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.969 | [Download](6240/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_12.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| **5720** | **0.980** | [**Download**](5720/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_12.png) |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.962 | [Download](5200/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_12.png) |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.964 | [Download](4680/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_12.png) |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.978 | [Download](4160/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_12.png) |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.971 | [Download](3640/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_12.png) |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.932 | [Download](3120/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_12.png) |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.967 | [Download](2600/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_12.png) |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.956 | [Download](2080/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_12.png) |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.958 | [Download](1560/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_12.png) |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.929 | [Download](1040/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_12.png) |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.945 | [Download](520/kurosaki_chitose_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_12.png) |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
Evan-Lin/yelp-attractive-keyword-1 | Evan-Lin | 2023-09-18T00:07:04Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2023-09-17T10:03:06Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text2text-generation", model="Evan-Lin/yelp-attractive-keyword-1")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/yelp-attractive-keyword-1")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/yelp-attractive-keyword-1")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
gilbertoesp/distilroberta-base-mrpc-glue | gilbertoesp | 2023-09-17T23:52:53Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-17T23:14:11Z | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrpc-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8819188191881918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4591
- Accuracy: 0.8431
- F1: 0.8819
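As a minimal paraphrase-detection sketch (an addition to this card; the sentence pair is illustrative, and label names depend on the exported config):
```python
from transformers import pipeline

mrpc = pipeline("text-classification", model="gilbertoesp/distilroberta-base-mrpc-glue")
pair = {"text": "The company posted record profits.",
        "text_pair": "Record profits were reported by the company."}
print(mrpc(pair))
```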
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5109 | 1.09 | 500 | 0.4591 | 0.8431 | 0.8819 |
| 0.3406 | 2.18 | 1000 | 0.5950 | 0.8652 | 0.8995 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jonas-luehrs/bert-base-german-cased-MLM-eu-or-ddr | jonas-luehrs | 2023-09-17T23:50:50Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-17T23:31:02Z | ---
license: mit
base_model: bert-base-german-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-german-cased-MLM-eu-or-ddr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-MLM-eu-or-ddr
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2564
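As a minimal usage sketch (an addition to this card; the example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jonas-luehrs/bert-base-german-cased-MLM-eu-or-ddr")
for pred in fill("Die Hauptstadt der DDR war [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```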
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0187 | 1.0 | 391 | 1.5006 |
| 1.5353 | 2.0 | 782 | 1.3764 |
| 1.4279 | 3.0 | 1173 | 1.3219 |
| 1.3776 | 4.0 | 1564 | 1.2894 |
| 1.3535 | 5.0 | 1955 | 1.2683 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
torkable/ppo-CartPole-v1 | torkable | 2023-09-17T23:47:15Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-17T23:46:56Z | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual artifact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename below is assumed.
checkpoint = load_from_hub(repo_id="torkable/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
```
|
anktiwerize/llama-2-7b-hf-small-shards-redditcasualtest | anktiwerize | 2023-09-17T23:46:28Z | 5 | 0 | peft | [
"peft",
"pytorch",
"llama",
"text-generation",
"dataset:anktiwerize/redditcasualtest",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-09-17T10:23:39Z | ---
library_name: peft
license: apache-2.0
datasets:
- anktiwerize/redditcasualtest
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
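### Loading the adapter
A minimal loading sketch (an addition to this card; the base checkpoint name is an assumption — substitute the Llama-2-7B variant this adapter was actually trained against):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "anktiwerize/llama-2-7b-hf-small-shards-redditcasualtest")
```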
### Framework versions
- PEFT 0.6.0.dev0 |
kamaludeen/medicaldoc-llama2 | kamaludeen | 2023-09-17T23:37:39Z | 0 | 0 | peft | [
"peft",
"pytorch",
"text-generation",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-09-16T12:48:20Z | ---
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
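For reference, the listed settings correspond roughly to this `transformers` quantization config (a sketch added for clarity, not from the original card):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```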
### Framework versions
- PEFT 0.4.0 |
hansin91/image_classification | hansin91 | 2023-09-17T23:17:50Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-13T08:21:17Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2378
- Accuracy: 0.5875
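As a minimal inference sketch (an addition to this card; `example.jpg` is a placeholder for any local image path or URL):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hansin91/image_classification")
print(classifier("example.jpg"))
```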
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 2.0656 | 0.125 |
| No log | 2.0 | 80 | 2.0558 | 0.1938 |
| No log | 3.0 | 120 | 2.0177 | 0.2375 |
| No log | 4.0 | 160 | 1.9156 | 0.3438 |
| No log | 5.0 | 200 | 1.7849 | 0.3063 |
| No log | 6.0 | 240 | 1.6961 | 0.3187 |
| No log | 7.0 | 280 | 1.6026 | 0.3937 |
| No log | 8.0 | 320 | 1.5455 | 0.3688 |
| No log | 9.0 | 360 | 1.4723 | 0.4562 |
| No log | 10.0 | 400 | 1.3931 | 0.5 |
| No log | 11.0 | 440 | 1.4418 | 0.4375 |
| No log | 12.0 | 480 | 1.3306 | 0.4437 |
| 1.5855 | 13.0 | 520 | 1.2437 | 0.575 |
| 1.5855 | 14.0 | 560 | 1.3712 | 0.4875 |
| 1.5855 | 15.0 | 600 | 1.2102 | 0.55 |
| 1.5855 | 16.0 | 640 | 1.3217 | 0.5188 |
| 1.5855 | 17.0 | 680 | 1.3656 | 0.4938 |
| 1.5855 | 18.0 | 720 | 1.3261 | 0.525 |
| 1.5855 | 19.0 | 760 | 1.5611 | 0.4625 |
| 1.5855 | 20.0 | 800 | 1.4503 | 0.5125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
barto17/distilhubert-finetuned-gtzan | barto17 | 2023-09-17T23:10:43Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-09-17T21:22:04Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6087
- Accuracy: 0.85
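As a minimal inference sketch (an addition to this card; `example.wav` is a placeholder for a local music clip):
```python
from transformers import pipeline

genre = pipeline("audio-classification", model="barto17/distilhubert-finetuned-gtzan")
print(genre("example.wav"))  # top GTZAN genre predictions
```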
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.007 | 1.0 | 113 | 1.8377 | 0.4 |
| 1.3132 | 2.0 | 226 | 1.2420 | 0.62 |
| 1.0222 | 3.0 | 339 | 0.9306 | 0.76 |
| 0.8859 | 4.0 | 452 | 0.8253 | 0.73 |
| 0.6842 | 5.0 | 565 | 0.6612 | 0.78 |
| 0.3738 | 6.0 | 678 | 0.6719 | 0.79 |
| 0.421 | 7.0 | 791 | 0.6380 | 0.83 |
| 0.1587 | 8.0 | 904 | 0.5500 | 0.86 |
| 0.1807 | 9.0 | 1017 | 0.5794 | 0.85 |
| 0.1573 | 10.0 | 1130 | 0.6087 | 0.85 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc | Jzuluaga | 2023-09-17T23:06:50Z | 133 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"text",
"en-atc",
"en",
"generated_from_trainer",
"bertraffic",
"dataset:Jzuluaga/uwb_atcc",
"arxiv:2110.05781",
"arxiv:2211.04054",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-11-30T09:43:41Z | ---
language: en
license: apache-2.0
tags:
- text
- token-classification
- en-atc
- en
- generated_from_trainer
- bert
- bertraffic
datasets:
- Jzuluaga/uwb_atcc
metrics:
- Precision
- Recall
- Accuracy
- F1
- Jaccard Error Rate
widget:
- text: lining up runway three one csa five bravo easy five three kilo romeo contact
ruzyne ground one two one decimal nine good bye
- text: csa seven three two zero so change of taxi quality eight nine sierra we need
to full length britair five nine zero bravo contact ruzyne ground one two one
decimal nine good bye
- text: swiss four six one foxtrot line up runway three one and wait one two one nine
csa four yankee alfa
- text: tower klm five five tango ils three one wizz air four papa uniform tower roger
base_model: bert-base-uncased
model-index:
- name: bert-base-token-classification-for-atc-en-uwb-atcc
results:
- task:
type: token-classification
name: chunking
dataset:
name: UWB-ATCC corpus (Air Traffic Control Communications)
type: Jzuluaga/uwb_atcc
config: test
split: test
metrics:
- type: F1
value: 0.87
name: TEST F1 (macro)
verified: false
- type: Accuracy
value: 0.91
name: TEST Accuracy
verified: false
- type: Precision
value: 0.86
name: TEST Precision (macro)
verified: false
- type: Recall
value: 0.88
name: TEST Recall (macro)
verified: false
- type: Jaccard Error Rate
value: 0.169
name: TEST Jaccard Error Rate
verified: false
---
# bert-base-token-classification-for-atc-en-uwb-atcc
This model detects speaker roles and speaker changes from text. Normally, this task is done at the acoustic level; here, we propose to perform it at the text level.
We solve this challenge by performing speaker role and change detection with a BERT model, fine-tuned on a chunking (token-classification) task.
For instance:
- Speaker 1: **lufthansa six two nine charlie tango report when established**
- Speaker 2: **report when established lufthansa six two nine charlie tango**
Based on that, could you tell the speaker role? Is speaker 1 the air traffic controller or the pilot?
Also, if you have a recording with 2 or more speakers, like this:
- Recording with 2 or more segments: **report when established lufthansa six two nine charlie tango lufthansa six two nine charlie tango report when established**
could you tell when the first speaker ends and when the second starts? This is basically diarization plus speaker role detection.
Check the inference API (there are 3 examples)!
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
<a href="https://github.com/idiap/bert-text-diarization-atc">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green">
</a>
It achieves the following results on the evaluation set:
- Loss: 0.0098
- Precision: 0.9760
- Recall: 0.9741
- F1: 0.9750
- Accuracy: 0.9965
Paper: [BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications](https://arxiv.org/abs/2110.05781).
Authors: Juan Zuluaga-Gomez, Seyyed Saeed Sarfjoo, Amrutha Prasad, Iuliia Nigmatulina, Petr Motlicek, Karel Ondrej, Oliver Ohneiser, Hartmut Helmke
Abstract: Automatic speech recognition (ASR) allows transcribing the communications between air traffic controllers (ATCOs) and aircraft pilots. The transcriptions are used later to extract ATC named entities, e.g., aircraft callsigns. One common challenge is speech activity detection (SAD) and speaker diarization (SD). In the failure condition, two or more segments remain in the same recording, jeopardizing the overall performance. We propose a system that combines SAD and a BERT model to perform speaker change detection and speaker role detection (SRD) by chunking ASR transcripts, i.e., SD with a defined number of speakers together with SRD. The proposed model is evaluated on real-life public ATC databases. Our BERT SD model baseline reaches up to 10% and 20% token-based Jaccard error rate (JER) in public and private ATC databases. We also achieved relative improvements of 32% and 7.7% in JERs and SD error rate (DER), respectively, compared to VBx, a well-known SD system.
Code — GitHub repository: https://github.com/idiap/bert-text-diarization-atc
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We don't expect it to keep the same performance on other datasets on which BERT was pre-trained or fine-tuned.
## Training and evaluation data
See Table 3 (page 5) in our paper: [BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications](https://arxiv.org/abs/2110.05781). There we describe the data used to fine-tune our model for speaker role and speaker change detection.
- We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
- However, do not worry: we have prepared scripts in our repository for preparing these databases:
- Dataset preparation folder: https://github.com/idiap/bert-text-diarization-atc/tree/main/data/databases/uwb_atcc
- Prepare the data: https://github.com/idiap/bert-text-diarization-atc/blob/main/data/databases/uwb_atcc/data_prepare_uwb_atcc_corpus.sh
- Get the data in the format required by HuggingFace: https://github.com/idiap/bert-text-diarization-atc/blob/main/data/databases/uwb_atcc/exp_prepare_uwb_atcc_corpus.sh
## Writing your own inference script
The snippet of code:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc")
model = AutoModelForTokenClassification.from_pretrained("Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc")
##### Process text sample (from UWB-ATCC)
from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp("lining up runway three one csa five bravo b easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye")
[{'entity_group': 'pilot',
'score': 0.99991554,
'word': 'lining up runway three one csa five bravo b', 'start': 0, 'end': 43
},
{'entity_group': 'atco',
'score': 0.99994576,
'word': 'easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye', 'start': 44, 'end': 126
}]
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.03 | 500 | 0.2282 | 0.6818 | 0.7001 | 0.6908 | 0.9246 |
| 0.3487 | 0.06 | 1000 | 0.1214 | 0.8163 | 0.8024 | 0.8093 | 0.9631 |
| 0.3487 | 0.1 | 1500 | 0.0933 | 0.8496 | 0.8544 | 0.8520 | 0.9722 |
| 0.1124 | 0.13 | 2000 | 0.0693 | 0.8845 | 0.8739 | 0.8791 | 0.9786 |
| 0.1124 | 0.16 | 2500 | 0.0540 | 0.8993 | 0.8911 | 0.8952 | 0.9817 |
| 0.0667 | 0.19 | 3000 | 0.0474 | 0.9058 | 0.8929 | 0.8993 | 0.9857 |
| 0.0667 | 0.23 | 3500 | 0.0418 | 0.9221 | 0.9245 | 0.9233 | 0.9865 |
| 0.0492 | 0.26 | 4000 | 0.0294 | 0.9369 | 0.9415 | 0.9392 | 0.9903 |
| 0.0492 | 0.29 | 4500 | 0.0263 | 0.9512 | 0.9446 | 0.9479 | 0.9911 |
| 0.0372 | 0.32 | 5000 | 0.0223 | 0.9495 | 0.9497 | 0.9496 | 0.9915 |
| 0.0372 | 0.35 | 5500 | 0.0212 | 0.9530 | 0.9514 | 0.9522 | 0.9923 |
| 0.0308 | 0.39 | 6000 | 0.0177 | 0.9585 | 0.9560 | 0.9572 | 0.9933 |
| 0.0308 | 0.42 | 6500 | 0.0169 | 0.9619 | 0.9613 | 0.9616 | 0.9936 |
| 0.0261 | 0.45 | 7000 | 0.0140 | 0.9689 | 0.9662 | 0.9676 | 0.9951 |
| 0.0261 | 0.48 | 7500 | 0.0130 | 0.9652 | 0.9629 | 0.9641 | 0.9945 |
| 0.0214 | 0.51 | 8000 | 0.0127 | 0.9676 | 0.9635 | 0.9656 | 0.9953 |
| 0.0214 | 0.55 | 8500 | 0.0109 | 0.9714 | 0.9708 | 0.9711 | 0.9959 |
| 0.0177 | 0.58 | 9000 | 0.0103 | 0.9740 | 0.9727 | 0.9734 | 0.9961 |
| 0.0177 | 0.61 | 9500 | 0.0101 | 0.9768 | 0.9744 | 0.9756 | 0.9963 |
| 0.0159 | 0.64 | 10000 | 0.0098 | 0.9760 | 0.9741 | 0.9750 | 0.9965 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
rahular/varta-t5 | rahular | 2023-09-17T22:49:45Z | 568 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"as",
"bh",
"bn",
"en",
"gu",
"hi",
"kn",
"ml",
"mr",
"ne",
"or",
"pa",
"ta",
"te",
"ur",
"dataset:rahular/varta",
"arxiv:2305.05858",
"arxiv:1912.08777",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-12T01:24:39Z | ---
license: apache-2.0
datasets:
- rahular/varta
language:
- as
- bh
- bn
- en
- gu
- hi
- kn
- ml
- mr
- ne
- or
- pa
- ta
- te
- ur
---
# Varta-T5
## Model Description
Varta-T5 is a model pre-trained on the `full` training set of [Varta](https://huggingface.co/datasets/rahular/varta) in 14 Indic languages (Assamese, Bhojpuri, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Tamil, Telugu, and Urdu) and English, using span corruption and gap-sentence generation as objectives.
[Varta](https://huggingface.co/datasets/rahular/varta) is a large-scale news corpus for Indic languages, including 41.8 million news articles in 14 different Indic languages (and English), which come from a variety of high-quality sources.
The dataset and the model are introduced in [this paper](https://arxiv.org/abs/2305.05858). The code is released in [this repository](https://github.com/rahular/varta).
## Uses
You can use this model for text-to-text generation, but it is mostly intended to be fine-tuned on a downstream task.
Note that the text-to-text framework allows us to use the same model on any NLP task, including text generation tasks (e.g., machine translation, document summarization, question answering), and classification tasks (e.g., sentiment analysis).
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This work is mainly dedicated to the curation of a new multilingual dataset for Indic languages, many of which are low-resource languages. During data collection, we face several limitations that can potentially result in ethical concerns. Some of the important ones are mentioned below: <br>
- Our dataset contains only those articles written by DailyHunt's partner publishers. This has the potential to result in a bias towards a particular narrative or ideology that can affect the representativeness and diversity of the dataset.
- Another limitation is the languages represented in Varta. Out of 22 languages with official status in India, our dataset has only 13. There are 122 major languages spoken by at least 10,000 people and 159 other languages which are extremely low-resourced. None of these languages are represented in our dataset.
- We do not perform any kind of debiasing on Varta. This means that societal and cultural biases may exist in the dataset, which can adversely affect the fairness and inclusivity of the models trained on it.
## How to Get Started with the Model
You can use this model directly for span in-filling.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("rahular/varta-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("rahular/varta-t5")
```
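Assuming the tokenizer exposes the standard T5 sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...), which is consistent with the span-corruption pretraining described below, a minimal in-filling call looks like this (the input sentence and generation settings are illustrative assumptions, reusing `tokenizer` and `model` from above):
```python
# Minimal span in-filling sketch; the sentence and generation settings
# are illustrative assumptions.
text = "New Delhi is the <extra_id_0> of India."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```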
## Training Details
### Training Data
Varta contains 41.8 million high-quality news articles in 14 Indic languages and English.
With 34.5 million non-English article-headline pairs, it is the largest document-level dataset of its kind.
### Pretraining
- We use span corruption and gap-sentence generation as the pretraining objectives.
- Both objectives are sampled uniformly during pretraining.
- Span corruption is similar to masked language modeling, except that instead of masking random tokens, we mask spans of tokens with an average length of 3 (a minimal illustration of this format appears after this list).
- In gap-sentence prediction, whole sentences are masked instead of spans. We follow [the original work](https://arxiv.org/abs/1912.08777), and select sentences based on their "importance".
- Rouge-1 F1-score between the sentence and the document is used as a proxy for importance.
- We use 0.15 and 0.2 as the masking ratios for span corruption and gap-sentence generation, respectively.
- Since data sizes across languages in Varta vary from 1.5K articles (Bhojpuri) to 14.4M (Hindi), we use standard temperature-based sampling to upsample data when necessary.
- We pretrain Varta-T5 using the T5 1.1 base architecture with 12 encoder and decoder layers.
- We train with maximum sequence lengths of 512 and 256 for the encoder and decoder respectively.
- We use 12 attention heads with an embedding dimension of 768 and a feed-forward width of 2048.
- We use a 128K sentencepiece vocabulary.
- In total, the model has 395M parameters.
- The model is trained with the Adafactor optimizer with a warm-up of 10K steps.
- We use an initial learning rate of 1e-3 and use square root decay till we reach 2M steps.
- We use an effective batch size of 256 and train the model on TPU v3-8 chips.
- The model takes 11 days to train.
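To make the span-corruption objective concrete, here is a hand-written illustration of the T5-style input/target format it produces; the example sentence and exact mask placement are our own assumptions, not the authors' preprocessing code.
```python
# Hand-written illustration (assumption) of the T5-style span-corruption
# format; this is not the authors' preprocessing code.
original = "the quick brown fox jumps over the lazy dog"
# Masking spans with an average length of 3 at a 0.15 ratio might yield:
inputs  = "the quick <extra_id_0> jumps over <extra_id_1> dog"
targets = "<extra_id_0> brown fox <extra_id_1> the lazy <extra_id_2>"
```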
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Evaluation Results
Please see [the paper](https://arxiv.org/pdf/2305.05858.pdf).
## Citation
```
@misc{aralikatte2023varta,
title={V\=arta: A Large-Scale Headline-Generation Dataset for Indic Languages},
author={Rahul Aralikatte and Ziling Cheng and Sumanth Doddapaneni and Jackie Chi Kit Cheung},
year={2023},
eprint={2305.05858},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
QuophyDzifa/Sentiment-Analysis-Model | QuophyDzifa | 2023-09-17T22:46:05Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-03T16:25:12Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Sentiment-Analysis-Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-Analysis-Model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6227
- F1 Score: 0.7304
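Since the training data and label mapping are not documented below, the following minimal sketch simply prints the raw pipeline output; treat the label names it returns as model-specific.
```python
from transformers import pipeline

# Minimal inference sketch; the label names depend on the (undocumented)
# training data, so we print the raw pipeline output.
classifier = pipeline("text-classification", model="QuophyDzifa/Sentiment-Analysis-Model")
print(classifier("I really enjoyed this movie!"))
```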
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7461 | 0.5 | 500 | 0.7528 | 0.6523 |
| 0.6845 | 1.0 | 1000 | 0.6425 | 0.7132 |
| 0.5729 | 1.5 | 1500 | 0.6463 | 0.7415 |
| 0.5674 | 2.0 | 2000 | 0.6227 | 0.7304 |
| 0.41 | 2.5 | 2500 | 0.9091 | 0.7335 |
| 0.4017 | 3.0 | 3000 | 0.8304 | 0.7360 |
| 0.2691 | 3.5 | 3500 | 1.2177 | 0.7202 |
| 0.3128 | 4.0 | 4000 | 1.1197 | 0.7376 |
| 0.197 | 4.5 | 4500 | 1.2951 | 0.7341 |
| 0.1887 | 5.0 | 5000 | 1.4508 | 0.7239 |
| 0.11 | 5.5 | 5500 | 1.5447 | 0.7203 |
| 0.1462 | 6.0 | 6000 | 1.4909 | 0.7383 |
| 0.0907 | 6.5 | 6500 | 1.4809 | 0.7332 |
| 0.089 | 7.0 | 7000 | 1.7191 | 0.7244 |
| 0.0613 | 7.5 | 7500 | 1.7725 | 0.7294 |
| 0.0665 | 8.0 | 8000 | 1.8083 | 0.7290 |
| 0.0458 | 8.5 | 8500 | 1.8297 | 0.7346 |
| 0.0395 | 9.0 | 9000 | 1.8853 | 0.7304 |
| 0.0287 | 9.5 | 9500 | 1.9684 | 0.7273 |
| 0.0204 | 10.0 | 10000 | 1.9919 | 0.7308 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors | Panchovix | 2023-09-17T22:30:58Z | 13 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-12T20:22:40Z | ---
license: other
---
FP16 model of airoboros-l2-70b-gpt4-1.4.1 (https://huggingface.co/jondurbin/airoboros-l2-70b-gpt4-1.4.1), converted from .bin to .safetensors, intended for quantization with exllamav2.
It can also be loaded faster at FP16 using transformers.
There is a script inside the bin2safetensors folder that you can use to convert .bin files into .safetensors ones for other models.
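If you prefer not to use the bundled script, the core conversion can be sketched as follows; note that the repository's actual script may differ, and models with shared tensors may need extra handling.
```python
import torch
from safetensors.torch import save_file

# Minimal sketch of a .bin -> .safetensors conversion; the script in the
# bin2safetensors folder may differ.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
state_dict = {k: v.contiguous() for k, v in state_dict.items()}  # safetensors requires contiguous tensors
save_file(state_dict, "model.safetensors")
```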
Also, I included two measurements.json files to be used for quantization. The first one (called old) was made with https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/blob/refs%2Fconvert%2Fparquet/default/train/0000.parquet and the first exllamav2 version; the second one was made with a cleaned, well-formatted PIPPA dataset using the 17/09/2023 exllamav2 version.
CyberHarem/shirayuki_chiyo_idolmastercinderellagirls | CyberHarem | 2023-09-17T22:06:28Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/shirayuki_chiyo_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-17T21:43:51Z | ---
license: mit
datasets:
- CyberHarem/shirayuki_chiyo_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of shirayuki_chiyo_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4000, you need to download `4000/shirayuki_chiyo_idolmastercinderellagirls.pt` as the embedding and `4000/shirayuki_chiyo_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
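As an illustration, both files can be loaded together with diffusers. This is a minimal sketch under the assumption that `load_lora_weights` and `load_textual_inversion` accept these local files and that the preview base model is available in diffusers format; the prompt and settings are only examples (using the step-4000 files recommended below).
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch (assumption): load the preview base model, then attach the
# LoRA weights and the embedding together, as described above.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("4000", weight_name="shirayuki_chiyo_idolmastercinderellagirls.safetensors")
pipe.load_textual_inversion(
    "4000/shirayuki_chiyo_idolmastercinderellagirls.pt",
    token="shirayuki_chiyo_idolmastercinderellagirls",
)
image = pipe("shirayuki_chiyo_idolmastercinderellagirls, short_hair, black_hair").images[0]
image.save("preview.png")
```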
**The best step we recommend is 4000**, with the score of 0.950. The trigger words are:
1. `shirayuki_chiyo_idolmastercinderellagirls`
2. `short_hair, black_hair, bangs, purple_eyes, blunt_bangs, blush`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.867 | [Download](7500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](7500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/pattern_10.png) |  | [<NSFW, click to see>](7500/previews/pattern_12.png) | [<NSFW, click to see>](7500/previews/pattern_13.png) | [<NSFW, click to see>](7500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.877 | [Download](7000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](7000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/pattern_10.png) |  | [<NSFW, click to see>](7000/previews/pattern_12.png) | [<NSFW, click to see>](7000/previews/pattern_13.png) | [<NSFW, click to see>](7000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.938 | [Download](6500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](6500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/pattern_10.png) |  | [<NSFW, click to see>](6500/previews/pattern_12.png) | [<NSFW, click to see>](6500/previews/pattern_13.png) | [<NSFW, click to see>](6500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.946 | [Download](6000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](6000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/pattern_10.png) |  | [<NSFW, click to see>](6000/previews/pattern_12.png) | [<NSFW, click to see>](6000/previews/pattern_13.png) | [<NSFW, click to see>](6000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.946 | [Download](5500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](5500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/pattern_10.png) |  | [<NSFW, click to see>](5500/previews/pattern_12.png) | [<NSFW, click to see>](5500/previews/pattern_13.png) | [<NSFW, click to see>](5500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.939 | [Download](5000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](5000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/pattern_10.png) |  | [<NSFW, click to see>](5000/previews/pattern_12.png) | [<NSFW, click to see>](5000/previews/pattern_13.png) | [<NSFW, click to see>](5000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.924 | [Download](4500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/pattern_10.png) |  | [<NSFW, click to see>](4500/previews/pattern_12.png) | [<NSFW, click to see>](4500/previews/pattern_13.png) | [<NSFW, click to see>](4500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| **4000** | **0.950** | [**Download**](4000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/pattern_10.png) |  | [<NSFW, click to see>](4000/previews/pattern_12.png) | [<NSFW, click to see>](4000/previews/pattern_13.png) | [<NSFW, click to see>](4000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.924 | [Download](3500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/pattern_10.png) |  | [<NSFW, click to see>](3500/previews/pattern_12.png) | [<NSFW, click to see>](3500/previews/pattern_13.png) | [<NSFW, click to see>](3500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.924 | [Download](3000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/pattern_10.png) |  | [<NSFW, click to see>](3000/previews/pattern_12.png) | [<NSFW, click to see>](3000/previews/pattern_13.png) | [<NSFW, click to see>](3000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.927 | [Download](2500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/pattern_10.png) |  | [<NSFW, click to see>](2500/previews/pattern_12.png) | [<NSFW, click to see>](2500/previews/pattern_13.png) | [<NSFW, click to see>](2500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.946 | [Download](2000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/pattern_10.png) |  | [<NSFW, click to see>](2000/previews/pattern_12.png) | [<NSFW, click to see>](2000/previews/pattern_13.png) | [<NSFW, click to see>](2000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.920 | [Download](1500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/pattern_10.png) |  | [<NSFW, click to see>](1500/previews/pattern_12.png) | [<NSFW, click to see>](1500/previews/pattern_13.png) | [<NSFW, click to see>](1500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.914 | [Download](1000/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1000/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/pattern_10.png) |  | [<NSFW, click to see>](1000/previews/pattern_12.png) | [<NSFW, click to see>](1000/previews/pattern_13.png) | [<NSFW, click to see>](1000/previews/pattern_14.png) |  |  | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.764 | [Download](500/shirayuki_chiyo_idolmastercinderellagirls.zip) | [<NSFW, click to see>](500/previews/pattern_1.png) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/pattern_10.png) |  | [<NSFW, click to see>](500/previews/pattern_12.png) | [<NSFW, click to see>](500/previews/pattern_13.png) | [<NSFW, click to see>](500/previews/pattern_14.png) |  |  | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|