modelId (string, len 5–139) | author (string, len 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-13 06:28:01) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 518 classes) | tags (list, len 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-13 06:25:04) | card (string, len 11–1.01M)
---|---|---|---|---|---|---|---|---|---
prince99/results1 | prince99 | 2023-09-18T10:40:17Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"region:us"
]
| null | 2023-09-18T10:40:13Z | ---
base_model: meta-llama/Llama-2-13b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results1
This model is a fine-tuned version of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
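For reference, here is a minimal sketch of how the settings above map onto `transformers.TrainingArguments`; the `output_dir` value is a placeholder, and the optimizer is left at the Trainer default rather than set explicitly.
```python
from transformers import TrainingArguments

# A sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="results1",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 2 * 2 = 4
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=50,
)
```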
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
dvrkdvys/Ted_Cruz_G_157500 | dvrkdvys | 2023-09-18T10:13:47Z | 0 | 0 | null | [
"natural language generation",
"voice conversion",
"adversarial learning",
"license:openrail",
"region:us"
]
| null | 2023-09-18T10:07:47Z | ---
license: openrail
tags:
- natural language generation
- voice conversion
- adversarial learning
--- |
nxa277/falcon-7b_medichat_finetuned_final | nxa277 | 2023-09-18T10:12:35Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-08-20T17:08:19Z | ---
library_name: peft
---
# Fine-tuned Falcon-7B Model for Medical Diagnosis
## Model Details
### Model Description:
This model is a fine-tuned version of the Falcon-7B model. It was fine-tuned on Gretel.ai's "Symptom to Diagnosis" dataset, found at
the following link: https://huggingface.co/datasets/gretelai/symptom_to_diagnosis, in order to provide preliminary diagnoses based on the
symptom descriptions it is prompted with.
### Baseline Model:
For more details about the baseline Falcon-7B model, please see the following links:
1. https://huggingface.co/tiiuae/falcon-7b
2. https://huggingface.co/blog/falcon
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
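As a sketch, the quantization config above corresponds to the following `BitsAndBytesConfig`, after which the adapter in this repo can be attached with PEFT; the `device_map` choice is an assumption.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the Falcon-7B base model and attach this repo's adapter.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "nxa277/falcon-7b_medichat_finetuned_final")
```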
### Framework versions
- PEFT 0.5.0.dev0
|
thomas0104/whisper-large-v2-nan-tw-only-char | thomas0104 | 2023-09-18T09:58:56Z | 27 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-01T08:08:34Z | ---
language:
- zh
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
base_model: openai/whisper-large-v2
model-index:
- name: Whisper large-v2 nan-tw only char
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0 nan-tw
type: mozilla-foundation/common_voice_11_0
config: nan-tw
split: test
args: nan-tw
metrics:
- type: wer
value: 45.37404580152672
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper large-v2 nan-tw only char
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 nan-tw dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0351
- Wer: 45.3740
- Cer: 45.4573
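A minimal transcription sketch with the `transformers` pipeline; the audio path is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thomas0104/whisper-large-v2-nan-tw-only-char")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```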
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.6011 | 1.04 | 1000 | 1.1100 | 55.0229 | 55.2068 |
| 0.1773 | 2.08 | 2000 | 1.2055 | 58.6565 | 58.7685 |
| 0.015 | 3.13 | 3000 | 1.0932 | 48.6412 | 48.8077 |
| 0.0131 | 5.01 | 4000 | 1.0531 | 45.7099 | 45.8497 |
| 0.0001 | 6.05 | 5000 | 1.0351 | 45.3740 | 45.4573 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
anggtpd/emotion_recognition | anggtpd | 2023-09-18T09:58:50Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-14T12:40:05Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_recognition
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.45625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_recognition
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6139
- Accuracy: 0.4562
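A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="anggtpd/emotion_recognition")
print(classifier("face.jpg"))  # "face.jpg" is a placeholder image path
```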
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.9416 | 0.3438 |
| 1.8445 | 2.0 | 10 | 1.8517 | 0.3937 |
| 1.8445 | 3.0 | 15 | 1.7436 | 0.3875 |
| 1.6748 | 4.0 | 20 | 1.6654 | 0.475 |
| 1.6748 | 5.0 | 25 | 1.6098 | 0.5062 |
| 1.5405 | 6.0 | 30 | 1.5734 | 0.4875 |
| 1.5405 | 7.0 | 35 | 1.5446 | 0.4938 |
| 1.4603 | 8.0 | 40 | 1.5415 | 0.4938 |
| 1.4603 | 9.0 | 45 | 1.5173 | 0.5062 |
| 1.4154 | 10.0 | 50 | 1.4983 | 0.5062 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bardsai/finance-sentiment-fr-base | bardsai | 2023-09-18T09:54:48Z | 728 | 5 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"fr",
"dataset:datasets/financial_phrasebank",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T09:53:50Z | ---
language: fr
tags:
- text-classification
- financial-sentiment-analysis
- sentiment-analysis
datasets:
- datasets/financial_phrasebank
metrics:
- f1
- accuracy
- precision
- recall
widget:
- text: "Le chiffre d'affaires net a augmenté de 30 % pour atteindre 36 millions d'euros."
example_title: "Example 1"
- text: "Coup d'envoi du vendredi fou. Liste des promotions en magasin."
example_title: "Example 2"
- text: "Les actions de CDPROJEKT ont enregistré la plus forte baisse parmi les entreprises cotées au WSE."
example_title: "Example 3"
---
# Finance Sentiment FR (base)
Finance Sentiment FR (base) is a model based on [camembert-base](https://huggingface.co/camembert-base) for analyzing the sentiment of French financial news. It was trained on the translated version of [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) for 10 epochs on a single RTX 3090 GPU.
The model outputs one of three labels: positive, negative or neutral.
## How to use
You can use this model directly with a pipeline for sentiment-analysis:
```python
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model="bardsai/finance-sentiment-fr-base")
nlp("Le chiffre d'affaires net a augmenté de 30 % pour atteindre 36 millions d'euros.")
```
```bash
[{'label': 'positive', 'score': 0.9987998807375955}]
```
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.963 |
| precision macro | 0.959 |
| recall macro | 0.967 |
| accuracy | 0.971 |
| samples per second | 140.8 |
(Performance was evaluated on an RTX 3090 GPU.)
## Changelog
- 2023-09-18: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
|
Karsinogenic69/emotion_classification | Karsinogenic69 | 2023-09-18T09:53:45Z | 200 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-18T09:50:26Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4512
- Accuracy: 0.5
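For lower-level control, a sketch using the processor and model classes directly; the image path is a placeholder.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "Karsinogenic69/emotion_classification"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("face.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```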
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4449 | 0.4688 |
| No log | 2.0 | 80 | 1.4457 | 0.4938 |
| No log | 3.0 | 120 | 1.3813 | 0.5563 |
| No log | 4.0 | 160 | 1.5903 | 0.4313 |
| No log | 5.0 | 200 | 1.4512 | 0.5 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/yusa_kozue_idolmastercinderellagirls | CyberHarem | 2023-09-18T09:40:26Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yusa_kozue_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T09:22:35Z | ---
license: mit
datasets:
- CyberHarem/yusa_kozue_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yusa_kozue_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7800, you need to download `7800/yusa_kozue_idolmastercinderellagirls.pt` as the embedding and `7800/yusa_kozue_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7800**, with a score of 0.924. The trigger words are:
1. `yusa_kozue_idolmastercinderellagirls`
2. `blonde_hair, ahoge, blush, green_eyes, twintails, low_twintails, long_hair, open_mouth`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **7800** | **0.924** | [**Download**](7800/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](7800/previews/pattern_1.png) | [<NSFW, click to see>](7800/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_12.png) | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.909 | [Download](7280/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](7280/previews/pattern_1.png) | [<NSFW, click to see>](7280/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_12.png) | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.853 | [Download](6760/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](6760/previews/pattern_1.png) | [<NSFW, click to see>](6760/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_12.png) | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.920 | [Download](6240/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](6240/previews/pattern_1.png) | [<NSFW, click to see>](6240/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_12.png) | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.862 | [Download](5720/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](5720/previews/pattern_1.png) | [<NSFW, click to see>](5720/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_12.png) | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.784 | [Download](5200/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](5200/previews/pattern_1.png) | [<NSFW, click to see>](5200/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_12.png) | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.770 | [Download](4680/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4680/previews/pattern_1.png) | [<NSFW, click to see>](4680/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_12.png) | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.749 | [Download](4160/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4160/previews/pattern_1.png) | [<NSFW, click to see>](4160/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_12.png) | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.700 | [Download](3640/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3640/previews/pattern_1.png) | [<NSFW, click to see>](3640/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_12.png) | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.802 | [Download](3120/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3120/previews/pattern_1.png) | [<NSFW, click to see>](3120/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_12.png) | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.677 | [Download](2600/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2600/previews/pattern_1.png) | [<NSFW, click to see>](2600/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_12.png) | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.705 | [Download](2080/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2080/previews/pattern_1.png) | [<NSFW, click to see>](2080/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_12.png) | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.478 | [Download](1560/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1560/previews/pattern_1.png) | [<NSFW, click to see>](1560/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_12.png) | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.492 | [Download](1040/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1040/previews/pattern_1.png) | [<NSFW, click to see>](1040/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_12.png) | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.554 | [Download](520/yusa_kozue_idolmastercinderellagirls.zip) | [<NSFW, click to see>](520/previews/pattern_1.png) | [<NSFW, click to see>](520/previews/pattern_2.png) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_12.png) | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
kbbabu/flanT5_grammerly_ft | kbbabu | 2023-09-18T09:36:15Z | 4 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"dataset:grammarly/coedit",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-15T10:57:08Z | ---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: coedit-finetuned
results: []
datasets:
- grammarly/coedit
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# coedit-finetuned
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the [grammarly/coedit](https://huggingface.co/datasets/grammarly/coedit) dataset.
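A minimal usage sketch; the instruction-style prompt mirrors the CoEdit dataset format and is an assumption here.
```python
from transformers import pipeline

editor = pipeline("text2text-generation", model="kbbabu/flanT5_grammerly_ft")
print(editor("Fix the grammar: She do not likes apples.")[0]["generated_text"])
```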
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
Vishal24/Llama-2-7b-chat-hf-fine-tuned-adapters | Vishal24 | 2023-09-18T09:35:24Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T08:11:24Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
dai152/1 | dai152 | 2023-09-18T09:31:54Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
]
| null | 2023-09-18T09:31:54Z | ---
license: bigcode-openrail-m
---
|
junaid20/llama-fine-tuned-qa | junaid20 | 2023-09-18T09:26:36Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-09-18T09:18:02Z | ---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: llama-fine-tuned-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-fine-tuned-qa
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
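A minimal generation sketch; the question-answer prompt format and generation settings are assumptions.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="junaid20/llama-fine-tuned-qa")
prompt = "Question: What is fine-tuning?\nAnswer:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```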
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
matelorg/q-FrozenLake-v1-4x4-noSlippery | matelorg | 2023-09-18T09:25:08Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T09:25:06Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="matelorg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Naveen2910/Taxi-V3 | Naveen2910 | 2023-09-18T09:14:34Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T09:14:33Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Naveen2910/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Lamurias/ppo-Pyramids | Lamurias | 2023-09-18T08:59:02Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-15T16:59:26Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Lamurias/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Charishma13/my_awesome_model | Charishma13 | 2023-09-18T08:48:22Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T07:35:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
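A minimal sentiment-classification sketch with the pipeline API.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Charishma13/my_awesome_model")
print(classifier("This movie was a delightful surprise."))
```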
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Cherishh/whisper-slu-1 | Cherishh | 2023-09-18T08:42:48Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T08:42:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
pmarar96/ddpm-celebahq-finetuned-butterflies-2epochs | pmarar96 | 2023-09-18T08:30:39Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-09-18T08:30:17Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('pmarar96/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
nickprock/xlm-roberta-base-banking77-classification | nickprock | 2023-09-18T08:30:35Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-08-16T11:02:45Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- accuracy
widget:
- text: 'Can I track the card you sent to me? '
example_title: Card Arrival Example - English
- text: 'Posso tracciare la carta che mi avete spedito? '
example_title: Card Arrival Example - Italian
- text: Can you explain your exchange rate policy to me?
example_title: Exchange Rate Example - English
- text: Potete spiegarmi la vostra politica dei tassi di cambio?
example_title: Exchange Rate Example - Italian
- text: I can't pay by my credit card
example_title: Card Not Working Example - English
- text: Non riesco a pagare con la mia carta di credito
example_title: Card Not Working Example - Italian
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-banking77-classification
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.9321428571428572
name: Accuracy
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: test
metrics:
- type: accuracy
value: 0.9321428571428572
name: Accuracy
verified: true
- type: precision
value: 0.9339627666926148
name: Precision Macro
verified: true
- type: precision
value: 0.9321428571428572
name: Precision Micro
verified: true
- type: precision
value: 0.9339627666926148
name: Precision Weighted
verified: true
- type: recall
value: 0.9321428571428572
name: Recall Macro
verified: true
- type: recall
value: 0.9321428571428572
name: Recall Micro
verified: true
- type: recall
value: 0.9321428571428572
name: Recall Weighted
verified: true
- type: f1
value: 0.9320514513719953
name: F1 Macro
verified: true
- type: f1
value: 0.9321428571428572
name: F1 Micro
verified: true
- type: f1
value: 0.9320514513719956
name: F1 Weighted
verified: true
- type: loss
value: 0.30337899923324585
name: loss
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-banking77-classification
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3034
- Accuracy: 0.9321
- F1 Score: 0.9321
## Model description
An experiment with a cross-lingual model: assessing how accurate classification remains when the model is fine-tuned on an English dataset but queried in Italian.
## Intended uses & limitations
The model can be used for text classification. In particular, it is fine-tuned on the banking domain for multilingual tasks.
## Training and evaluation data
The dataset used is [banking77](https://huggingface.co/datasets/banking77)
The 77 labels are:
|label|intent|
|:---:|:----:|
|0|activate_my_card|
|1|age_limit|
|2|apple_pay_or_google_pay|
|3|atm_support|
|4|automatic_top_up|
|5|balance_not_updated_after_bank_transfer|
|6|balance_not_updated_after_cheque_or_cash_deposit|
|7|beneficiary_not_allowed|
|8|cancel_transfer|
|9|card_about_to_expire|
|10|card_acceptance|
|11|card_arrival|
|12|card_delivery_estimate|
|13|card_linking|
|14|card_not_working|
|15|card_payment_fee_charged|
|16|card_payment_not_recognised|
|17|card_payment_wrong_exchange_rate|
|18|card_swallowed|
|19|cash_withdrawal_charge|
|20|cash_withdrawal_not_recognised|
|21|change_pin|
|22|compromised_card|
|23|contactless_not_working|
|24|country_support|
|25|declined_card_payment|
|26|declined_cash_withdrawal|
|27|declined_transfer|
|28|direct_debit_payment_not_recognised|
|29|disposable_card_limits|
|30|edit_personal_details|
|31|exchange_charge|
|32|exchange_rate|
|33|exchange_via_app|
|34|extra_charge_on_statement|
|35|failed_transfer|
|36|fiat_currency_support|
|37|get_disposable_virtual_card|
|38|get_physical_card|
|39|getting_spare_card|
|40|getting_virtual_card|
|41|lost_or_stolen_card|
|42|lost_or_stolen_phone|
|43|order_physical_card|
|44|passcode_forgotten|
|45|pending_card_payment|
|46|pending_cash_withdrawal|
|47|pending_top_up|
|48|pending_transfer|
|49|pin_blocked|
|50|receiving_money|
|51|Refund_not_showing_up|
|52|request_refund|
|53|reverted_card_payment?|
|54|supported_cards_and_currencies|
|55|terminate_account|
|56|top_up_by_bank_transfer_charge|
|57|top_up_by_card_charge|
|58|top_up_by_cash_or_cheque|
|59|top_up_failed|
|60|top_up_limits|
|61|top_up_reverted|
|62|topping_up_by_card|
|63|transaction_charged_twice|
|64|transfer_fee_charged|
|65|transfer_into_account|
|66|transfer_not_received_by_recipient|
|67|transfer_timing|
|68|unable_to_verify_identity|
|69|verify_my_identity|
|70|verify_source_of_funds|
|71|verify_top_up|
|72|virtual_card_not_working|
|73|visa_or_mastercard|
|74|why_verify_identity|
|75|wrong_amount_of_cash_received|
|76|wrong_exchange_rate_for_cash_withdrawal|
## How to use
You can use this model directly with a pipeline for text classification:
```python
from transformers import pipeline

pipe = pipeline("text-classification", model="nickprock/xlm-roberta-base-banking77-classification")
pipe("Non riesco a pagare con la carta di credito")  # "I can't pay with my credit card"
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.8002 | 1.0 | 157 | 2.7771 | 0.5159 | 0.4483 |
| 2.4006 | 2.0 | 314 | 1.6937 | 0.7140 | 0.6720 |
| 1.4633 | 3.0 | 471 | 1.0385 | 0.8308 | 0.8153 |
| 0.9234 | 4.0 | 628 | 0.7008 | 0.8789 | 0.8761 |
| 0.6163 | 5.0 | 785 | 0.5029 | 0.9068 | 0.9063 |
| 0.4282 | 6.0 | 942 | 0.4084 | 0.9123 | 0.9125 |
| 0.3203 | 7.0 | 1099 | 0.3515 | 0.9253 | 0.9253 |
| 0.245 | 8.0 | 1256 | 0.3295 | 0.9227 | 0.9225 |
| 0.1863 | 9.0 | 1413 | 0.3092 | 0.9269 | 0.9269 |
| 0.1518 | 10.0 | 1570 | 0.2901 | 0.9338 | 0.9338 |
| 0.1179 | 11.0 | 1727 | 0.2938 | 0.9318 | 0.9319 |
| 0.0969 | 12.0 | 1884 | 0.2906 | 0.9328 | 0.9328 |
| 0.0805 | 13.0 | 2041 | 0.2963 | 0.9295 | 0.9295 |
| 0.063 | 14.0 | 2198 | 0.2998 | 0.9289 | 0.9288 |
| 0.0554 | 15.0 | 2355 | 0.2933 | 0.9351 | 0.9349 |
| 0.046 | 16.0 | 2512 | 0.2960 | 0.9328 | 0.9326 |
| 0.04 | 17.0 | 2669 | 0.3032 | 0.9318 | 0.9318 |
| 0.035 | 18.0 | 2826 | 0.3061 | 0.9312 | 0.9312 |
| 0.0317 | 19.0 | 2983 | 0.3030 | 0.9331 | 0.9330 |
| 0.0315 | 20.0 | 3140 | 0.3034 | 0.9321 | 0.9321 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Vicky0522/RSFNet-models | Vicky0522 | 2023-09-18T08:28:16Z | 0 | 0 | null | [
"arxiv:2303.08682",
"region:us"
]
| null | 2023-09-12T14:21:04Z | Pretrained models for RSFNet
Paper: https://arxiv.org/abs/2303.08682
Code: https://github.com/Vicky0522/RSFNet
If our work is helpful for your research, please consider citing:
```
@article{oywq2023rsfnet,
title={RSFNet: A white-Box image retouching approach using region-specific color filters},
author={Wenqi Ouyang and Yi Dong and Xiaoyang Kang and Peiran Ren and Xin Xu and Xuansong Xie},
journal={https://arxiv.org/abs/2303.08682},
year={2023}
}
``` |
CyberHarem/mukai_takumi_idolmastercinderellagirls | CyberHarem | 2023-09-18T08:27:50Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/mukai_takumi_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T08:03:55Z | ---
license: mit
datasets:
- CyberHarem/mukai_takumi_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mukai_takumi_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6480, you need to download `6480/mukai_takumi_idolmastercinderellagirls.pt` as the embedding and `6480/mukai_takumi_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6480**, with a score of 0.806. The trigger words are:
1. `mukai_takumi_idolmastercinderellagirls`
2. `long_hair, breasts, blush, black_hair, large_breasts, brown_hair, cleavage, bangs, collarbone, green_eyes`
We do not recommend this model for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.797 | [Download](8100/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](8100/previews/pattern_3.png) |  |  | [<NSFW, click to see>](8100/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.783 | [Download](7560/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](7560/previews/pattern_3.png) |  |  | [<NSFW, click to see>](7560/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.736 | [Download](7020/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](7020/previews/pattern_3.png) |  |  | [<NSFW, click to see>](7020/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| **6480** | **0.806** | [**Download**](6480/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](6480/previews/pattern_3.png) |  |  | [<NSFW, click to see>](6480/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.766 | [Download](5940/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](5940/previews/pattern_3.png) |  |  | [<NSFW, click to see>](5940/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.792 | [Download](5400/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](5400/previews/pattern_3.png) |  |  | [<NSFW, click to see>](5400/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.794 | [Download](4860/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](4860/previews/pattern_3.png) |  |  | [<NSFW, click to see>](4860/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.747 | [Download](4320/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](4320/previews/pattern_3.png) |  |  | [<NSFW, click to see>](4320/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.761 | [Download](3780/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](3780/previews/pattern_3.png) |  |  | [<NSFW, click to see>](3780/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.769 | [Download](3240/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](3240/previews/pattern_3.png) |  |  | [<NSFW, click to see>](3240/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.693 | [Download](2700/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](2700/previews/pattern_3.png) |  |  | [<NSFW, click to see>](2700/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.810 | [Download](2160/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](2160/previews/pattern_3.png) |  |  | [<NSFW, click to see>](2160/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.718 | [Download](1620/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](1620/previews/pattern_3.png) |  |  | [<NSFW, click to see>](1620/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.703 | [Download](1080/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](1080/previews/pattern_3.png) |  |  | [<NSFW, click to see>](1080/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.610 | [Download](540/mukai_takumi_idolmastercinderellagirls.zip) |  |  | [<NSFW, click to see>](540/previews/pattern_3.png) |  |  | [<NSFW, click to see>](540/previews/pattern_6.png) |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
BAAI/Aquila-7B | BAAI | 2023-09-18T08:26:37Z | 1,824 | 17 | transformers | [
"transformers",
"pytorch",
"aquila",
"custom_code",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-06-08T07:25:29Z | ---
license: other
---

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/BAAI/Aquila-7B/blob/main/README_zh.md">简体中文</a>
</p>
</h4>
Aquila Language Model is the first open-source language model that combines Chinese-English bilingual knowledge, support for commercial licensing, and compliance with domestic data regulations.
- 🌟 **Supports open source commercial licenses**. The source code of the Aquila series models is based on the [Apache 2.0 agreement](https://www.apache.org/licenses/LICENSE-2.0), while the model weight is based on the [BAAI Aquila Model License Agreement](https://huggingface.co/BAAI/Aquila-7B/resolve/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf). Users can use it for commercial purposes as long as they meet the licensing restrictions.
- ✍️ **Possesses Chinese and English knowledge**. The Aquila series model is trained from scratch on a high-quality corpus of Chinese and English languages, with Chinese corpora accounting for about 40%, ensuring that the model accumulates native Chinese world knowledge during the pre-training phase, rather than translated knowledge.
- 👮‍♀️ **Complies with domestic data regulations**. The Chinese corpora of the Aquila series models come from Intelligence Source's accumulated Chinese datasets over the years, including Chinese internet data from over 10,000 sources (more than 99% of which are domestic sources), as well as high-quality Chinese literature and book data supported by authoritative domestic organizations. We will continue to accumulate high-quality and diverse datasets and incorporate them into the subsequent training of the Aquila base models.
- 🎯 **Continuous improvements and open sourcing**. We will continue to improve training data, optimize training methods, and enhance model performance, cultivate a flourishing "model tree" on a better base model foundation, and continuously update open-source versions.
The additional details of the Aquila model will be presented in the official technical report. Please stay tuned for updates on official channels, including the [FlagAI GitHub repository](https://github.com/FlagAI-Open/FlagAI/), [FlagAI's Zhihu account](https://www.zhihu.com/people/95-22-20-18) and [FlagAI's official technical communication group](https://github.com/FlagAI-Open/FlagAI/blob/master/wechat-qrcode.jpg).
| Model | Model Type | Description | Status | GPUs Used |
| :----------------- | :----------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :--------------| :----------- |
| Aquila-7B | Base model, 7 billion parameters | **Aquila Base Model** inherits the architectural design advantages of GPT-3 and LLaMA. It swaps in a batch of more efficient underlying operator implementations, redesigns the bilingual tokenizer, upgrades the BMTrain parallel training method, and achieves nearly 8 times the training efficiency of Megatron+DeepSpeed ZeRO-2. | Released | Nvidia-A100 |
| Aquila-33B | Base model, 33 billion parameters | Same as above | Coming soon | Nvidia-A100 |
| AquilaChat-7B | SFT model, fine-tuned and RL based on Aquila-7B | **AquilaChat Dialog Model** supports fluent text dialogue and multiple language generation tasks. By defining an expandable special-instruction specification, it enables AquilaChat to call other models and tools, and it is easy to extend. For example, calling Flagship Intelligence's open-source **[AltDiffusion](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltDiffusion-m18) multimodal language image generation model** achieved smooth image generation capability. Together with Flagship Intelligence's **InstructFace multi-step controllable text-picture model**, it can easily achieve multi-step controllable editing of human face images. | Released | Nvidia-A100 |
| AquilaChat-33B | SFT model, fine-tuned and RL based on Aquila-33B | Same as above | Coming soon | Nvidia-A100 |
| AquilaCode-7B-NV | Base model, "text-code" generation model, further pre-trained based on Aquila-7B, trained on Nvidia | AquilaCode-7B achieves high performance with small datasets and parameter counts, and is currently the best open-source code model that supports both Chinese and English. It was trained on code data with compliant open-source licenses after high-quality filtering. AquilaCode-7B has been trained on both Nvidia and domestic chips. | Released on GitHub | Nvidia-A100 |
| AquilaCode-7B-TS | Base model, "text-code" generation model, further pre-trained based on Aquila-7B, trained on Horizon Robotics chips | Same as above | Released on GitHub | Tianshu-BI-V100 |
We will continue to release improved versions of the Aquila model as open source.
- 2023/08/15: released v0.10
- Aquila-7B-01 md5: 4279db72e68df1a0705ecc8d4c7be3db
- Aquila-7B-02 md5: 621f8ce4c8deebe1635f5a09aa4b80f2
- AquilaChat-7B-01 md5: 22b22ffaed51388ce23f8e328a9b6a18
- AquilaChat-7B-02 md5: 6e84423fe2837c79c0ced6817c316bd4
Aquila-7B shows improvements in the FlagEval large-model evaluation ("Objective") over the previous version, with a gain of approximately 9.09% on the TruthfulQA dataset. For detailed evaluation results, please refer to http://flageval.baai.ac.cn.
For detailed version change history, see [Change Log](https://huggingface.co/BAAI/Aquila-7B/blob/main/change_log.log).
## Quick Start Aquila-7B
### 1. Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_info = "BAAI/Aquila-7B"
tokenizer = AutoTokenizer.from_pretrained(model_info, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_info, trust_remote_code=True)
model.eval()
model.to("cuda:0")
text = "汽车EDR是什么"
tokens = tokenizer.encode_plus(text)['input_ids'][:-1]
tokens = torch.tensor(tokens)[None,].to("cuda:0")
with torch.no_grad():
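    # token id 100007 is Aquila's end-of-text marker; generation stops when it is produced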
out = model.generate(tokens, do_sample=True, max_length=512, eos_token_id=100007)[0]
out = tokenizer.decode(out.cpu().numpy().tolist())
print(out)
```
## License
The Aquila-7B and AquilaChat-33B open-source models are licensed under the [BAAI Aquila Model Licence Agreement](https://huggingface.co/BAAI/Aquila-7B/resolve/main/BAAI%20Aquila%20Model%20License%20Agreement.pdf) |
bardsai/twitter-emotion-pl-base | bardsai | 2023-09-18T08:23:29Z | 898 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"twitter",
"pl",
"dataset:datasets/tweet_eval",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-19T10:55:07Z | ---
language: pl
tags:
- text-classification
- twitter
datasets:
- datasets/tweet_eval
metrics:
- f1
- accuracy
- precision
- recall
widget:
- text: "Nigdy przegrana nie sprawiła mi takiej radości. Szczęście i Opatrzność mają znaczenie Gratuluje @pzpn_pl"
example_title: "Example 1"
- text: "Osoby z Ukrainy zapłacą za życie w centrach pomocy? Sprzeczne prawem UE, niehumanitarne, okrutne."
example_title: "Example 2"
---
# Twitter emotion PL (base)
Twitter emotion PL (base) is a model based on [herbert-base](https://huggingface.co/allegro/herbert-base-cased) for analyzing the emotion of Polish Twitter posts. It was trained on a translated version of [TweetEval](https://www.researchgate.net/publication/347233661_TweetEval_Unified_Benchmark_and_Comparative_Evaluation_for_Tweet_Classification) (Barbieri et al., 2020) for 10 epochs on a single RTX 3090 GPU.
The model outputs one of four labels: joy, optimism, sadness and anger.
## How to use
You can use this model directly with a pipeline for text classification:
```python
from transformers import pipeline
nlp = pipeline("text-classification", model="bardsai/twitter-emotion-pl-base")
nlp("Nigdy przegrana nie sprawiła mi takiej radości. Szczęście i Opatrzność mają znaczenie Gratuluje @pzpn_pl")
```
```bash
[{'label': 'joy', 'score': 0.5163766145706177}]
```
## Performance
| Metric | Value |
| --- | ----------- |
| f1 macro | 0.756 |
| precision macro | 0.767 |
| recall macro | 0.750 |
| accuracy | 0.789 |
| samples per second | 131.6 |
(Performance was evaluated on an RTX 3090 GPU.)
## Changelog
- 2023-07-19: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of nlp, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: [bards.ai](https://bards.ai/)
Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
|
octava/audio_classification | octava | 2023-09-18T08:23:27Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-09-11T13:23:00Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.09734513274336283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6471
- Accuracy: 0.0973
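A minimal inference sketch with the `transformers` pipeline (note that the reported accuracy of ~0.10 over the 14 intent classes of minds14 is close to chance, so treat predictions as illustrative; `audio.wav` is a placeholder filename):

```python
from transformers import pipeline

# load the fine-tuned checkpoint as an audio-classification pipeline
classifier = pipeline("audio-classification", model="octava/audio_classification")

# "audio.wav" is a hypothetical local file; any mono 16 kHz recording works
predictions = classifier("audio.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```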
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 15 | 2.6423 | 0.0531 |
| No log | 2.0 | 30 | 2.6471 | 0.0973 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AlexanderBond/distilbert-base-uncased-finetuned-emotion | AlexanderBond | 2023-09-18T08:12:36Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T06:01:24Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9218912616592688
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.922
- F1: 0.9219
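A minimal usage sketch with the `transformers` pipeline (the label set comes from the `emotion` dataset; the example sentence and the score in the comment are illustrative only):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlexanderBond/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am absolutely thrilled with these results!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- the exact score will vary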
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8214 | 1.0 | 250 | 0.3159 | 0.909 | 0.9085 |
| 0.2497 | 2.0 | 500 | 0.2165 | 0.922 | 0.9219 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
vdivya/dummy-model | vdivya | 2023-09-18T08:09:13Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T07:57:50Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0608
- Train Accuracy: 0.9804
- Validation Loss: 0.2496
- Validation Accuracy: 0.9140
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a Keras reconstruction follows the list):
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 25257, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
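For reference, the logged optimizer configuration above corresponds roughly to the following Keras construction (a sketch; the `decay_steps` value is copied verbatim from the config):

```python
import tensorflow as tf

# linear decay from 5e-5 to 0 over 25257 steps, matching the logged PolynomialDecay schedule
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-5,
    decay_steps=25257,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-07
)
```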
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2262 | 0.9143 | 0.2503 | 0.9094 | 0 |
| 0.1133 | 0.9622 | 0.2515 | 0.9083 | 1 |
| 0.0608 | 0.9804 | 0.2496 | 0.9140 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kurileo/blip2-opt-2.7b-refines | kurileo | 2023-09-18T08:03:37Z | 2 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T08:02:34Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (reconstructed in code after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
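For reference, the same settings can be rebuilt as a `transformers` `BitsAndBytesConfig` (a sketch; pass it as `quantization_config` when loading the BLIP-2 base model before attaching this adapter):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```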
### Framework versions
- PEFT 0.5.0
|
ernestum/sac-seals-Ant-v1 | ernestum | 2023-09-18T07:54:23Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Ant-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:54:00Z | ---
library_name: stable-baselines3
tags:
- seals/Ant-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Ant-v1
type: seals/Ant-v1
metrics:
- type: mean_reward
value: 1004.15 +/- 26.60
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Ant-v1**
This is a trained model of a **SAC** agent playing **seals/Ant-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Ant-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Ant-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Ant-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Ant-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Ant-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Ant-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('gamma', 0.98),
('learning_rate', 0.0018514039303149058),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -2.2692589009754176,
'net_arch': [256, 256],
'use_sde': False}),
('tau', 0.05),
('train_freq', 64),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/sac-seals-HalfCheetah-v1 | ernestum | 2023-09-18T07:53:35Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/HalfCheetah-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:53:34Z | ---
library_name: stable-baselines3
tags:
- seals/HalfCheetah-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/HalfCheetah-v1
type: seals/HalfCheetah-v1
metrics:
- type: mean_reward
value: 1183.52 +/- 22.65
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/HalfCheetah-v1**
This is a trained model of a **SAC** agent playing **seals/HalfCheetah-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/HalfCheetah-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/HalfCheetah-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/HalfCheetah-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/HalfCheetah-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/HalfCheetah-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/HalfCheetah-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 2048),
('buffer_size', 100000),
('gamma', 0.95),
('learning_rate', 0.000884624878315995),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -0.6932709443503001,
'net_arch': [64, 64],
'use_sde': False}),
('tau', 0.01),
('train_freq', 64),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/sac-seals-Hopper-v1 | ernestum | 2023-09-18T07:52:51Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Hopper-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:53:03Z | ---
library_name: stable-baselines3
tags:
- seals/Hopper-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Hopper-v1
type: seals/Hopper-v1
metrics:
- type: mean_reward
value: 2279.30 +/- 124.09
name: mean_reward
verified: false
---
# **SAC** Agent playing **seals/Hopper-v1**
This is a trained model of a **SAC** agent playing **seals/Hopper-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env seals/Hopper-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo sac --env seals/Hopper-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env seals/Hopper-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env seals/Hopper-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('gamma', 0.98),
('learning_rate', 0.001709807687567946),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'log_std_init': -1.6829391077276037,
'net_arch': [256, 256],
'use_sde': False}),
('tau', 0.08),
('train_freq', 32),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nchen909/codellama-7b-python-sft-v1.1 | nchen909 | 2023-09-18T07:52:00Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T09:27:02Z | ---
license: cc
---
Evol-Instruct-Python
|
ernestum/ppo-seals-Walker2d-v1 | ernestum | 2023-09-18T07:48:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Walker2d-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:51:52Z | ---
library_name: stable-baselines3
tags:
- seals/Walker2d-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Walker2d-v1
type: seals/Walker2d-v1
metrics:
- type: mean_reward
value: 2465.56 +/- 272.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Walker2d-v1**
This is a trained model of a **PPO** agent playing **seals/Walker2d-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Walker2d-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Walker2d-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Walker2d-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Walker2d-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Walker2d-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Walker2d-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 8),
('clip_range', 0.4),
('ent_coef', 0.00013057334805552262),
('gae_lambda', 0.92),
('gamma', 0.98),
('learning_rate', 3.791707778339674e-05),
('max_grad_norm', 0.6),
('n_envs', 1),
('n_epochs', 5),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.98, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [256, 256], 'vf': [256, 256]}]}),
('vf_coef', 0.6167177795726859),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.98,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/ppo-seals-Humanoid-v1 | ernestum | 2023-09-18T07:47:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Humanoid-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T07:46:45Z | ---
library_name: stable-baselines3
tags:
- seals/Humanoid-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Humanoid-v1
type: seals/Humanoid-v1
metrics:
- type: mean_reward
value: 3224.12 +/- 925.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Humanoid-v1**
This is a trained model of a **PPO** agent playing **seals/Humanoid-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Humanoid-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Humanoid-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Humanoid-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Humanoid-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.2),
('ent_coef', 2.0745206045994986e-05),
('gae_lambda', 0.92),
('gamma', 0.999),
('learning_rate', 2.0309225666232827e-05),
('max_grad_norm', 0.5),
('n_envs', 1),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 10000000.0),
('normalize',
{'gamma': 0.999, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.ReLU'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [256, 256], 'vf': [256, 256]}]}),
('vf_coef', 0.819262464558427),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.999,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/ppo-seals-Swimmer-v1 | ernestum | 2023-09-18T07:45:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/Swimmer-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:50:49Z | ---
library_name: stable-baselines3
tags:
- seals/Swimmer-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/Swimmer-v1
type: seals/Swimmer-v1
metrics:
- type: mean_reward
value: 292.84 +/- 3.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/Swimmer-v1**
This is a trained model of a **PPO** agent playing **seals/Swimmer-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Swimmer-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Swimmer-v1 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/Swimmer-v1 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/Swimmer-v1 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/Swimmer-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/Swimmer-v1 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 8),
('clip_range', 0.1),
('ent_coef', 5.167107294612664e-08),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 0.0001214437022727675),
('max_grad_norm', 2),
('n_epochs', 20),
('n_steps', 2048),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.999, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.6162112311062333),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.999,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ernestum/ppo-seals-MountainCar-v0 | ernestum | 2023-09-18T07:43:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"seals/MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T11:50:03Z | ---
library_name: stable-baselines3
tags:
- seals/MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/MountainCar-v0
type: seals/MountainCar-v0
metrics:
- type: mean_reward
value: -97.00 +/- 8.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **seals/MountainCar-v0**
This is a trained model of a **PPO** agent playing **seals/MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga ernestum -f logs/
python -m rl_zoo3.enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env seals/MountainCar-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/MountainCar-v0 -f logs/ -orga ernestum
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('clip_range', 0.2),
('ent_coef', 6.4940755116195606e-06),
('gae_lambda', 0.98),
('gamma', 0.99),
('learning_rate', 0.0004476103728105138),
('max_grad_norm', 1),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 256),
('n_timesteps', 1000000.0),
('normalize',
{'gamma': 0.99, 'norm_obs': False, 'norm_reward': True}),
('policy', 'MlpPolicy'),
('policy_kwargs',
{'activation_fn': <class 'torch.nn.modules.activation.Tanh'>,
'features_extractor_class': <class 'imitation.policies.base.NormalizeFeaturesExtractor'>,
'net_arch': [{'pi': [64, 64], 'vf': [64, 64]}]}),
('vf_coef', 0.25988158989488963),
('normalize_kwargs',
{'norm_obs': {'gamma': 0.99,
'norm_obs': False,
'norm_reward': True},
'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
checkiejan/prefix-paraphase-30-20-auto | checkiejan | 2023-09-18T07:28:51Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T07:28:49Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
lepin2001/catsordogs | lepin2001 | 2023-09-18T07:23:07Z | 0 | 0 | fastai | [
"fastai",
"code",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-18T07:19:04Z | ---
license: apache-2.0
language:
- en
library_name: fastai
tags:
- code
--- |
kming/unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train | kming | 2023-09-18T07:21:10Z | 159 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"audio-xvector",
"generated_from_trainer",
"dataset:edinburghcstr/ami",
"base_model:microsoft/unispeech-sat-base-plus-sv",
"base_model:finetune:microsoft/unispeech-sat-base-plus-sv",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-18T07:11:54Z | ---
base_model: microsoft/unispeech-sat-base-plus-sv
tags:
- generated_from_trainer
datasets:
- edinburghcstr/ami
model-index:
- name: unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train
This model is a fine-tuned version of [microsoft/unispeech-sat-base-plus-sv](https://huggingface.co/microsoft/unispeech-sat-base-plus-sv) on the ami dataset.
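A minimal sketch for extracting speaker embeddings with this checkpoint (assumes 16 kHz mono input; the random waveform below is a stand-in for real audio):

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, UniSpeechSatForXVector

model_id = "kming/unispeech-sat-base-plus-sv-finetuned-ami-ten-percent-train"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = UniSpeechSatForXVector.from_pretrained(model_id)

# stand-in for one second of 16 kHz mono audio; replace with a real recording
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).embeddings  # one x-vector embedding per utterance
```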
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Archfiend/ardic-ai-sd-fdb | Archfiend | 2023-09-18T07:17:21Z | 17 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-21T20:20:03Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ardic-ai-sd-fdb Dreambooth model trained by Archfiend
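A usage sketch with diffusers (the concept's trigger prompt is not documented in this card, so the token below is an assumption; substitute the prompt used during DreamBooth training):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Archfiend/ardic-ai-sd-fdb", torch_dtype=torch.float16
).to("cuda")

# "ardic-ai-sd-fdb" is an assumed trigger token, not confirmed by the card
image = pipe("a photo of ardic-ai-sd-fdb").images[0]
image.save("sample.png")
```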
Sample pictures of this concept:
|
marcelsamyn/lora-trained-xl-folder | marcelsamyn | 2023-09-18T07:16:10Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:marcelsamyn/marcelsamyn3",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-18T06:27:31Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: marcelsamyn
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- marcelsamyn/marcelsamyn3
---
# LoRA DreamBooth - marcelsamyn/lora-trained-xl-folder
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained on the concept prompt:
`marcelsamyn`
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, `accelerate` and `invisible_watermark`:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
# This is where you load your trained weights
pipe.load_lora_weights('marcelsamyn/lora-trained-xl-folder')
pipe.to("cuda")
prompt = "A majestic marcelsamyn jumping from a big stone at night"
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
```
|
warp-ai/wuerstchen-prior-model-base | warp-ai | 2023-09-18T07:02:05Z | 24 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2306.00637",
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2023-09-03T19:39:26Z | ---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
## Würstchen - Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
computational costs for both training and inference by orders of magnitude: training on 1024x1024 images is far more expensive than training on 32x32. Other works typically
use a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme: through its novel design, it achieves a 42x spatial
compression, previously unseen, because common methods fail to faithfully reconstruct detailed images beyond 16x spatial compression. Würstchen employs a
two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used by current top-performing models, also making
inference cheaper and faster.
## Würstchen - Prior
The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to decode the latents into pixel space.
### Prior - Model - Base
This is the base checkpoint for the Prior (Stage C). This means this is only pretrained and generates mostly standard images. We recommend using the [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated),
as this is our best checkpoint for the Prior (Stage C) because it was finetuned on a curated dataset. However, we recommend this checkpoint if you want to finetune Würstchen
on your own large dataset, as the other checkpoints are already biased towards being more artistic. This checkpoint should provide a fairly standard baseline to finetune
from, as long as your dataset is rather large.
**Note:** This checkpoint was also already trained on multi-aspect-ratios, meaning you can generate larger images than just 1024x1024. Sometimes generations up to 2048x2048
even work. Feel free to try it out!
**Also Note:** The base checkpoint usually requires a higher classifier-free-guidance value (`guidance_scale=8.0`) and a negative caption in order to produce good-looking
images. The [interpolated model](https://huggingface.co/warp-ai/wuerstchen-prior-model-interpolated) and [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned)
usually don't need a negative caption and work better with a lower classifier-free-guidance value (`guidance_scale=4.0`).
### Image Sizes
Würstchen was trained on image resolutions between 1024x1024 & 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely fast to new resolutions. So finetuning it at 2048x2048 should be computationally cheap.
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
## How to run
This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
```py
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import WuerstchenPrior, DEFAULT_STAGE_C_TIMESTEPS
device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2
prior = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-base", torch_dtype=dtype).to(device)
prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
"warp-ai/wuerstchen-prior", prior=prior, torch_dtype=dtype
).to(device)
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
"warp-ai/wuerstchen", torch_dtype=dtype
).to(device)
caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = "bad anatomy, blurry, fuzzy, extra arms, extra fingers, poorly drawn hands, disfigured, tiling, deformed, mutated, drawing"
prior_output = prior_pipeline(
prompt=caption,
height=1024,
width=1024,
    timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=8.0,
num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
image_embeddings=prior_output.image_embeddings,
prompt=caption,
negative_prompt=negative_prompt,
num_images_per_prompt=num_images_per_prompt,
guidance_scale=0.0,
output_type="pil",
).images
```
## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**
```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
      author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Environmental Impact
**Würstchen v2** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq. |
warp-ai/wuerstchen-prior-model-interpolated | warp-ai | 2023-09-18T07:01:48Z | 23 | 3 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2306.00637",
"arxiv:1910.09700",
"license:mit",
"region:us"
]
| null | 2023-09-03T19:45:43Z | ---
license: mit
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
## Würstchen - Overview
Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
computational costs for both training and inference by orders of magnitude: training on 1024x1024 images is far more expensive than training on 32x32. Other works typically
use a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme: through its novel design, it achieves a 42x spatial
compression, previously unseen, because common methods fail to faithfully reconstruct detailed images beyond 16x spatial compression. Würstchen employs a
two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used by current top-performing models, also making
inference cheaper and faster.
## Würstchen - Prior
The Prior is what we refer to as "Stage C". It is the text-conditional model, operating in the small latent space that Stage A and Stage B encode images into. During
inference, its job is to generate the image latents given text. These image latents are then sent to Stages A & B to decode the latents into pixel space.
### Prior - Model - Interpolated
The interpolated model is our current best Prior (Stage C) checkpoint. It is an interpolation between our [base model](https://huggingface.co/warp-ai/wuerstchen-prior-model-base) and the [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned).
We created this interpolation because the finetuned model became too artistic and often generates only artistic images, whereas the base model is usually very photorealistic.
As a result, we combined the two by interpolating their weights at 50%, i.e. the midpoint between the base and finetuned models (`0.5 * base_weights + 0.5 * finetuned_weights`).
You can also interpolate the [base model](https://huggingface.co/warp-ai/wuerstchen-prior-model-base) and the [finetuned model](https://huggingface.co/warp-ai/wuerstchen-prior-model-finetuned)
with any other weighting, and may find an interpolation that fits your needs better than this checkpoint; see the sketch below.
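A minimal sketch of such a weight interpolation (assuming both checkpoints share identical parameter keys; `alpha` is the mixing weight you choose):

```python
import torch
from diffusers.pipelines.wuerstchen import WuerstchenPrior

alpha = 0.5  # 0.0 = base model, 1.0 = finetuned model
base = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-base")
finetuned = WuerstchenPrior.from_pretrained("warp-ai/wuerstchen-prior-model-finetuned")

finetuned_state = finetuned.state_dict()
merged_state = {
    key: torch.lerp(param, finetuned_state[key], alpha)
    for key, param in base.state_dict().items()
}
base.load_state_dict(merged_state)  # `base` now holds the interpolated weights
```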
### Image Sizes
Würstchen was trained on image resolutions between 1024x1024 & 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely fast to new resolutions. So finetuning it at 2048x2048 should be computationally cheap.
<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
## How to run
This pipeline should be run together with https://huggingface.co/warp-ai/wuerstchen:
```py
import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS
device = "cuda"
dtype = torch.float16
num_images_per_prompt = 2
prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
"warp-ai/wuerstchen-prior", torch_dtype=dtype
).to(device)
decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
"warp-ai/wuerstchen", torch_dtype=dtype
).to(device)
caption = "Anthropomorphic cat dressed as a fire fighter"
negative_prompt = ""
prior_output = prior_pipeline(
prompt=caption,
height=1024,
width=1536,
timesteps=DEFAULT_STAGE_C_TIMESTEPS,
negative_prompt=negative_prompt,
guidance_scale=4.0,
num_images_per_prompt=num_images_per_prompt,
)
decoder_output = decoder_pipeline(
image_embeddings=prior_output.image_embeddings,
prompt=caption,
negative_prompt=negative_prompt,
guidance_scale=0.0,
output_type="pil",
).images
```
## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**
```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
      author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Environmental Impact
**Würstchen v2** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq.
|
Abhay1212/news_demo | Abhay1212 | 2023-09-18T06:57:11Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T06:52:21Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
ailoveydovey/anyqngmxrl | ailoveydovey | 2023-09-18T06:54:06Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T06:39:38Z | ---
license: creativeml-openrail-m
---
|
etri-xainlp/polyglot-ko-12.8b-instruct | etri-xainlp | 2023-09-18T06:40:24Z | 2,274 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-12T07:48:37Z | ---
license: apache-2.0
language:
- ko
---
# polyglot-ko-12.8b-instruct
This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on an instruction-following dataset(260k).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU(A100 80G)
- num_devices: 8
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Inference
```python
import torch
from transformers import pipeline, AutoModelForCausalLM
MODEL = 'etri-xainlp/polyglot-ko-12.8b-instruct'
model = AutoModelForCausalLM.from_pretrained(
MODEL,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to(device="cuda", non_blocking=True)
model.eval()
pipe = pipeline(
'text-generation',
model=model,
tokenizer=MODEL,
device=0
)
pipe.model.config.pad_token_id = pipe.model.config.eos_token_id
def ask(x, context='', is_input_full=False):
ans = pipe(
f"### 질문: {x}\n\n### 맥락: {context}\n\n### 답변:" if context else f"### 질문: {x}\n\n### 답변:",
do_sample=True,
max_new_tokens=2048,
temperature=0.9,
top_p=0.9,
return_full_text=False,
eos_token_id=2,
)
return ans[0]['generated_text']
while True:
    user_input = input('prompt?: ')
    if user_input == 'q':
        break
    generation = ask(user_input)
    print("etri_ai:", generation)
```
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt | ys7yoo | 2023-09-18T06:40:01Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"base_model:finetune:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T06:06:14Z | ---
base_model: ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt
This model is a fine-tuned version of [ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3](https://huggingface.co/ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Mse: 0.3250
- Mae: 0.4166
- R2: 0.8512
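A minimal inference sketch, assuming the checkpoint exposes a single-logit regression head (consistent with the MSE/MAE/R2 metrics above); the Korean sentence pair is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep9_ckpt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# encode a sentence pair; the model regresses a single similarity score
inputs = tokenizer("오늘 날씨가 좋다", "오늘은 날씨가 맑다", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```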
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.2084 | 1.0 | 183 | 0.5071 | 0.5071 | 0.5306 | 0.7678 |
| 0.1515 | 2.0 | 366 | 0.3142 | 0.3142 | 0.4149 | 0.8561 |
| 0.103 | 3.0 | 549 | 0.3284 | 0.3284 | 0.4150 | 0.8496 |
| 0.0779 | 4.0 | 732 | 0.3306 | 0.3306 | 0.4184 | 0.8486 |
| 0.0597 | 5.0 | 915 | 0.3219 | 0.3219 | 0.4098 | 0.8526 |
| 0.0497 | 6.0 | 1098 | 0.3324 | 0.3324 | 0.4175 | 0.8478 |
| 0.0407 | 7.0 | 1281 | 0.3114 | 0.3114 | 0.4119 | 0.8574 |
| 0.0356 | 8.0 | 1464 | 0.3305 | 0.3305 | 0.4199 | 0.8486 |
| 0.0327 | 9.0 | 1647 | 0.3250 | 0.3250 | 0.4166 | 0.8512 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
checkiejan/prefix-paraphase-25-20-auto | checkiejan | 2023-09-18T06:35:16Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T06:35:14Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Archolic/SDArchitecture | Archolic | 2023-09-18T06:23:40Z | 0 | 0 | null | [
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-18T06:19:56Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include:
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to produce factual or true representations of people or events, so using the model to generate such content is out of scope for its abilities.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
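A minimal sketch of what this looks like from the Diffusers side (the `nsfw_content_detected` field is part of the standard `StableDiffusionPipelineOutput`; exact behavior may vary across versions):
```py
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

output = pipe("a photo of an astronaut riding a horse on mars")
# The safety checker runs after generation; flagged images are blacked out
# and reported per image in this field:
print(output.nsfw_content_detected)  # e.g. [False]
```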
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the shape check after this list)
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
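As a quick sanity check of the shapes above, a sketch like the following encodes a dummy 512x512 batch with the pipeline's VAE (illustrative only; a real image should be normalized to [-1, 1]):
```py
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
dummy = torch.randn(1, 3, 512, 512)  # a batch of one 512x512 RGB "image"
with torch.no_grad():
    latents = vae.encode(dummy).latent_dist.sample()
print(latents.shape)  # torch.Size([1, 4, 64, 64]) = H/8 x W/8 x 4
```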
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10,000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).* |
kming/wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-new | kming | 2023-09-18T06:07:31Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-xvector",
"generated_from_trainer",
"dataset:edinburghcstr/ami",
"base_model:anton-l/wav2vec2-base-superb-sv",
"base_model:finetune:anton-l/wav2vec2-base-superb-sv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-15T09:23:26Z | ---
license: apache-2.0
base_model: anton-l/wav2vec2-base-superb-sv
tags:
- generated_from_trainer
datasets:
- edinburghcstr/ami
model-index:
- name: wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-normalized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-normalized
This model is a fine-tuned version of [anton-l/wav2vec2-base-superb-sv](https://huggingface.co/anton-l/wav2vec2-base-superb-sv) on the ami dataset.
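As an illustrative usage sketch (assuming the fine-tuned checkpoint keeps the x-vector speaker-verification head of the base model; the dummy waveforms below are stand-ins for real 16 kHz audio):
```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForXVector

model_id = "kming/wav2vec2-base-superb-sv-finetuned-ami-ten-percent-train-new"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForXVector.from_pretrained(model_id)

# two dummy 1-second mono waveforms at 16 kHz (replace with real audio)
wav1, wav2 = torch.randn(16000), torch.randn(16000)
inputs = feature_extractor([wav1.numpy(), wav2.numpy()], sampling_rate=16000,
                           return_tensors="pt", padding=True)
with torch.no_grad():
    embeddings = model(**inputs).embeddings
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
print(similarity.item())  # higher = more likely the same speaker
```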
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TamerAbdelaziz/distilbert-base-uncased-finetuned-sst2 | TamerAbdelaziz | 2023-09-18T05:56:36Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T05:36:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: TamerAbdelaziz/distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TamerAbdelaziz/distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0592
- Validation Loss: 0.2958
- Train Accuracy: 0.9060
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 12627, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2123 | 0.2546 | 0.9014 | 0 |
| 0.1023 | 0.2641 | 0.8899 | 1 |
| 0.0592 | 0.2958 | 0.9060 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/mizumoto_yukari_idolmastercinderellagirls | CyberHarem | 2023-09-18T05:50:54Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/mizumoto_yukari_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T05:29:09Z | ---
license: mit
datasets:
- CyberHarem/mizumoto_yukari_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mizumoto_yukari_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/mizumoto_yukari_idolmastercinderellagirls.pt` as the embedding and `7280/mizumoto_yukari_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with a score of 0.975. The trigger words are:
1. `mizumoto_yukari_idolmastercinderellagirls`
2. `brown_hair, long_hair, brown_eyes, blush, smile, bangs, open_mouth, breasts`
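For Diffusers users, a rough sketch of wiring both files into a pipeline might look like the following; whether an HCP-Diffusion LoRA and embedding load cleanly through these generic helpers is an assumption, and a Stable Diffusion WebUI workflow is the more common path:
```py
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the pt file as a textual-inversion embedding and the safetensors as a LoRA
pipe.load_textual_inversion("7280/mizumoto_yukari_idolmastercinderellagirls.pt",
                            token="mizumoto_yukari_idolmastercinderellagirls")
pipe.load_lora_weights("7280/mizumoto_yukari_idolmastercinderellagirls.safetensors")

image = pipe("mizumoto_yukari_idolmastercinderellagirls, brown_hair, long_hair, smile").images[0]
image.save("preview.png")
```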
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.971 | [Download](7800/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_5.png) | [<NSFW, click to see>](7800/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_12.png) | [<NSFW, click to see>](7800/previews/pattern_13.png) |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.975** | [**Download**](7280/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_5.png) | [<NSFW, click to see>](7280/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_12.png) | [<NSFW, click to see>](7280/previews/pattern_13.png) |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.965 | [Download](6760/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_5.png) | [<NSFW, click to see>](6760/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_12.png) | [<NSFW, click to see>](6760/previews/pattern_13.png) |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.964 | [Download](6240/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_5.png) | [<NSFW, click to see>](6240/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_12.png) | [<NSFW, click to see>](6240/previews/pattern_13.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.975 | [Download](5720/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_5.png) | [<NSFW, click to see>](5720/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_12.png) | [<NSFW, click to see>](5720/previews/pattern_13.png) |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.972 | [Download](5200/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_5.png) | [<NSFW, click to see>](5200/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_12.png) | [<NSFW, click to see>](5200/previews/pattern_13.png) |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.968 | [Download](4680/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_5.png) | [<NSFW, click to see>](4680/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_12.png) | [<NSFW, click to see>](4680/previews/pattern_13.png) |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.966 | [Download](4160/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_5.png) | [<NSFW, click to see>](4160/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_12.png) | [<NSFW, click to see>](4160/previews/pattern_13.png) |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.969 | [Download](3640/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_5.png) | [<NSFW, click to see>](3640/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_12.png) | [<NSFW, click to see>](3640/previews/pattern_13.png) |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.967 | [Download](3120/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_5.png) | [<NSFW, click to see>](3120/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_12.png) | [<NSFW, click to see>](3120/previews/pattern_13.png) |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.967 | [Download](2600/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_5.png) | [<NSFW, click to see>](2600/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_12.png) | [<NSFW, click to see>](2600/previews/pattern_13.png) |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.960 | [Download](2080/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_5.png) | [<NSFW, click to see>](2080/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_12.png) | [<NSFW, click to see>](2080/previews/pattern_13.png) |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.961 | [Download](1560/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_5.png) | [<NSFW, click to see>](1560/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_12.png) | [<NSFW, click to see>](1560/previews/pattern_13.png) |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.960 | [Download](1040/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_5.png) | [<NSFW, click to see>](1040/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_12.png) | [<NSFW, click to see>](1040/previews/pattern_13.png) |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.958 | [Download](520/mizumoto_yukari_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_5.png) | [<NSFW, click to see>](520/previews/pattern_6.png) |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_12.png) | [<NSFW, click to see>](520/previews/pattern_13.png) |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
xtrbase/positive-llm | xtrbase | 2023-09-18T05:39:21Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T05:38:57Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
GlennQuagmire/ER-MIX | GlennQuagmire | 2023-09-18T05:26:46Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-08-12T02:24:36Z | ---
license: other
---
## I own nothing of this model; this repo exists solely for *caching* purposes
Pay a visit to [author](https://space.bilibili.com/49512651) and leave your endorsement!
# GIGGITY |
GlennQuagmire/DisillusionMix3 | GlennQuagmire | 2023-09-18T05:21:26Z | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-04-26T16:18:36Z | ---
license: creativeml-openrail-m
---
# I own nothing of this model. All credit goes to the original author; be sure to endorse his/her models on CivitAI!
[Click Me](https://civitai.com/user/Rerorerorero/models)
---
I started this repo to cache this model **giggity** |
Drello/Test | Drello | 2023-09-18T05:20:03Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2023-09-18T05:20:03Z | ---
license: bigscience-openrail-m
---
|
IXLFreaKz/GawrGura | IXLFreaKz | 2023-09-18T05:14:24Z | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
]
| null | 2023-09-18T05:12:28Z | ---
license: cc-by-nc-nd-4.0
---
|
abhiShek1061/imdb-classification | abhiShek1061 | 2023-09-18T05:14:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T04:42:42Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: imdb-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93228
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2332
- Accuracy: 0.9323
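A quick inference sketch with the pipeline API (the label names depend on the saved config and are shown here illustratively):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="abhiShek1061/imdb-classification")
print(classifier("This movie was an absolute delight from start to finish."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```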
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2233 | 1.0 | 1563 | 0.2479 | 0.9146 |
| 0.149 | 2.0 | 3126 | 0.2332 | 0.9323 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Panchovix/Synthia-70B-v1.2b-safetensors | Panchovix | 2023-09-18T05:13:13Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-18T03:09:27Z | ---
license: llama2
---
Safetensors conversion of Synthia-70B-v1.2b (https://huggingface.co/migtissera/Synthia-70B-v1.2b). It can be loaded directly with transformers, or used to convert/quantize models with exllamav2.
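A minimal loading sketch with transformers (the SYSTEM/USER/ASSISTANT prompt format is an assumption; check the original Synthia card for the intended template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Panchovix/Synthia-70B-v1.2b-safetensors"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model in fp16 needs ~140 GB of memory; device_map="auto" shards it
# across available GPUs (and offloads to CPU/disk if accelerate is configured).
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
inputs = tokenizer("SYSTEM: You are Synthia.\nUSER: Hello!\nASSISTANT:",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|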
ailabturkiye/sempatuco | ailabturkiye | 2023-09-18T05:04:01Z | 0 | 0 | null | [
"tr",
"license:openrail",
"region:us"
]
| null | 2023-08-09T13:54:30Z | ---
license: openrail
language:
- tr
--- |
ShivamMangale/XLM-Roberta-base-finetuned-squad-squad-first | ShivamMangale | 2023-09-18T04:51:49Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-18T01:31:54Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-finetuned-squad-squad-first
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-finetuned-squad-squad-first
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
furquan/opt_2_7_b_prompt_tuned_sentiment_analysis | furquan | 2023-09-18T04:51:43Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"feature-extraction",
"text-generation",
"custom_code",
"dataset:SetFit/sst5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-18T03:45:11Z | ---
datasets:
- SetFit/sst5
pipeline_tag: text-generation
widget:
- text: 'The weather is lovely today! '
--- |
pkduongsu/bert-finetuned-covidqadeepset | pkduongsu | 2023-09-18T04:42:30Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-18T04:22:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- covid_qa_deepset
model-index:
- name: bert-finetuned-covidqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-covidqa
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the covid_qa_deepset dataset.
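A quick inference sketch with the question-answering pipeline (the question/context pair below is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="pkduongsu/bert-finetuned-covidqadeepset")
result = qa(
    question="How is COVID-19 transmitted?",
    context="COVID-19 spreads primarily through respiratory droplets "
            "produced when an infected person coughs, sneezes, or talks.",
)
print(result["answer"], result["score"])
```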
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
alayaran/bodo-pos-gpt2-fine-tune | alayaran | 2023-09-18T04:37:43Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"token-classification",
"br",
"dataset:alayaran/bodo-pos-conll",
"dataset:alayaran/bodo-monolingual-dataset",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-18T03:53:21Z | ---
license: mit
datasets:
- alayaran/bodo-pos-conll
- alayaran/bodo-monolingual-dataset
language:
- br
metrics:
- accuracy
- seqeval
widget:
- text: "बर’फोरा मिथिंगा सिबियारि ।"
example_title: "Example 1"
- text: "गथ’फोर त्रेफिकिं खालामनायाबो भारताव मोनसे गोब्राब जेंना जागासिनो ।"
example_title: "Example 2"
- text: "गोबां बिबांफोरनि सोरकारनि फोरमानजों मदद होजानानै , बिमायारि आरो गथ’ देहाया देहानि मिनिसत्रिफोराव गोनांसिन जाबाय ।"
example_title: "Example 3"
--- |
CyberHarem/yorita_yoshino_idolmastercinderellagirls | CyberHarem | 2023-09-18T04:34:06Z | 0 | 2 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yorita_yoshino_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T04:11:00Z | ---
license: mit
datasets:
- CyberHarem/yorita_yoshino_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yorita_yoshino_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7840, you need to download `7840/yorita_yoshino_idolmastercinderellagirls.pt` as the embedding and `7840/yorita_yoshino_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7840**, with a score of 0.849. The trigger words are:
1. `yorita_yoshino_idolmastercinderellagirls`
2. `brown_eyes, brown_hair, long_hair, bangs, blush, bow, hair_bow, smile, very_long_hair`
For the following groups, use of this model is not recommended, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios that demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8400 | 0.765 | [Download](8400/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/pattern_8.png) | [<NSFW, click to see>](8400/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](8400/previews/bikini.png) | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| **7840** | **0.849** | [**Download**](7840/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](7840/previews/pattern_8.png) | [<NSFW, click to see>](7840/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](7840/previews/bikini.png) | [<NSFW, click to see>](7840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7840/previews/nude.png) | [<NSFW, click to see>](7840/previews/nude2.png) |  |  |
| 7280 | 0.824 | [Download](7280/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_8.png) | [<NSFW, click to see>](7280/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6720 | 0.779 | [Download](6720/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/pattern_8.png) | [<NSFW, click to see>](6720/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bikini.png) | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6160 | 0.773 | [Download](6160/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](6160/previews/pattern_8.png) | [<NSFW, click to see>](6160/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](6160/previews/bikini.png) | [<NSFW, click to see>](6160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| 5600 | 0.808 | [Download](5600/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/pattern_8.png) | [<NSFW, click to see>](5600/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bikini.png) | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 5040 | 0.822 | [Download](5040/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5040/previews/pattern_8.png) | [<NSFW, click to see>](5040/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](5040/previews/bikini.png) | [<NSFW, click to see>](5040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) |  |  |
| 4480 | 0.750 | [Download](4480/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/pattern_8.png) | [<NSFW, click to see>](4480/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bikini.png) | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3920 | 0.777 | [Download](3920/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3920/previews/pattern_8.png) | [<NSFW, click to see>](3920/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](3920/previews/bikini.png) | [<NSFW, click to see>](3920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3920/previews/nude.png) | [<NSFW, click to see>](3920/previews/nude2.png) |  |  |
| 3360 | 0.786 | [Download](3360/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/pattern_8.png) | [<NSFW, click to see>](3360/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2800 | 0.810 | [Download](2800/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/pattern_8.png) | [<NSFW, click to see>](2800/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bikini.png) | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2240 | 0.586 | [Download](2240/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2240/previews/pattern_8.png) | [<NSFW, click to see>](2240/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](2240/previews/bikini.png) | [<NSFW, click to see>](2240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2240/previews/nude.png) | [<NSFW, click to see>](2240/previews/nude2.png) |  |  |
| 1680 | 0.719 | [Download](1680/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1680/previews/pattern_8.png) | [<NSFW, click to see>](1680/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](1680/previews/bikini.png) | [<NSFW, click to see>](1680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) |  |  |
| 1120 | 0.677 | [Download](1120/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1120/previews/pattern_8.png) | [<NSFW, click to see>](1120/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](1120/previews/bikini.png) | [<NSFW, click to see>](1120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1120/previews/nude.png) | [<NSFW, click to see>](1120/previews/nude2.png) |  |  |
| 560 | 0.433 | [Download](560/yorita_yoshino_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](560/previews/pattern_8.png) | [<NSFW, click to see>](560/previews/pattern_9.png) |  |  |  |  |  | [<NSFW, click to see>](560/previews/bikini.png) | [<NSFW, click to see>](560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](560/previews/nude.png) | [<NSFW, click to see>](560/previews/nude2.png) |  |  |
|
huyen89/Reinforce-CartPole-v1 | huyen89 | 2023-09-18T04:21:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T04:20:58Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 223.40 +/- 21.61
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt | ys7yoo | 2023-09-18T04:19:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"base_model:finetune:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T03:57:43Z | ---
base_model: ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt
This model is a fine-tuned version of [ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3](https://huggingface.co/ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3202
- Mse: 0.3202
- Mae: 0.4109
- R2: 0.8534
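An illustrative scoring sketch (assuming the checkpoint is a single-output regression head over KLUE STS's 0-5 similarity scale; `function_to_apply="none"` returns the raw regression value):
```python
from transformers import pipeline

sts = pipeline(
    "text-classification",
    model="ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep7_ckpt",
    function_to_apply="none",  # return the raw regression output instead of a softmax
)
score = sts({"text": "첫 번째 문장입니다.", "text_pair": "두 번째 문장입니다."})
print(score)  # e.g. {'label': 'LABEL_0', 'score': 2.7} -- similarity on a 0-5 scale
```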
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.0857 | 1.0 | 183 | 0.4208 | 0.4208 | 0.4787 | 0.8073 |
| 0.1397 | 2.0 | 366 | 0.3135 | 0.3135 | 0.4191 | 0.8565 |
| 0.0989 | 3.0 | 549 | 0.3468 | 0.3468 | 0.4261 | 0.8412 |
| 0.0757 | 4.0 | 732 | 0.3006 | 0.3006 | 0.3959 | 0.8623 |
| 0.0601 | 5.0 | 915 | 0.4034 | 0.4034 | 0.4669 | 0.8153 |
| 0.0502 | 6.0 | 1098 | 0.3357 | 0.3357 | 0.4221 | 0.8463 |
| 0.0429 | 7.0 | 1281 | 0.3202 | 0.3202 | 0.4109 | 0.8534 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
m-aliabbas1/erc_question_big_model | m-aliabbas1 | 2023-09-18T04:01:58Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-18T04:01:12Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# m-aliabbas1/erc_question_big_model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves the two steps below; a minimal training sketch follows the list:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
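A minimal sketch of this two-step loop (the base Sentence Transformer and the toy dataset are illustrative; recent SetFit releases replace `SetFitTrainer` with `setfit.Trainer`):
```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# A tiny, made-up dataset purely for illustration
train_dataset = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_dataset, num_iterations=20)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head
```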
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("m-aliabbas1/erc_question_big_model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
dsmsb/16_combo_webscrap_1709_v2_reduce_others | dsmsb | 2023-09-18T04:00:21Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T01:47:02Z | ---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 16_combo_webscrap_1709_v2_reduce_others
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 16_combo_webscrap_1709_v2_reduce_others
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1501
- Accuracy: 0.9636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 363 | 1.0481 | 0.7263 |
| 1.5287 | 2.0 | 726 | 0.5613 | 0.8655 |
| 0.6856 | 3.0 | 1089 | 0.3666 | 0.9121 |
| 0.6856 | 4.0 | 1452 | 0.2880 | 0.9284 |
| 0.4313 | 5.0 | 1815 | 0.2187 | 0.9464 |
| 0.3097 | 6.0 | 2178 | 0.1992 | 0.9505 |
| 0.2454 | 7.0 | 2541 | 0.1627 | 0.9598 |
| 0.2454 | 8.0 | 2904 | 0.1501 | 0.9636 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
axelit64/image_classification | axelit64 | 2023-09-18T03:56:43Z | 229 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-18T03:07:32Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3340
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.5156 | 0.45 |
| No log | 2.0 | 80 | 1.4200 | 0.4562 |
| No log | 3.0 | 120 | 1.3790 | 0.5 |
| No log | 4.0 | 160 | 1.2859 | 0.525 |
| No log | 5.0 | 200 | 1.2592 | 0.5125 |
| No log | 6.0 | 240 | 1.3145 | 0.55 |
| No log | 7.0 | 280 | 1.3267 | 0.4813 |
| No log | 8.0 | 320 | 1.3288 | 0.5 |
| No log | 9.0 | 360 | 1.3073 | 0.5 |
| No log | 10.0 | 400 | 1.3066 | 0.5188 |
| No log | 11.0 | 440 | 1.2691 | 0.5563 |
| No log | 12.0 | 480 | 1.2809 | 0.5437 |
| 0.876 | 13.0 | 520 | 1.2963 | 0.5625 |
| 0.876 | 14.0 | 560 | 1.2965 | 0.5312 |
| 0.876 | 15.0 | 600 | 1.3542 | 0.5188 |
| 0.876 | 16.0 | 640 | 1.3489 | 0.5125 |
| 0.876 | 17.0 | 680 | 1.3146 | 0.5687 |
| 0.876 | 18.0 | 720 | 1.2442 | 0.575 |
| 0.876 | 19.0 | 760 | 1.3497 | 0.575 |
| 0.876 | 20.0 | 800 | 1.3316 | 0.5437 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
handi88/FastJobs-Visual_Emotions_Analysis | handi88 | 2023-09-18T03:55:02Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:FastJobs/Visual_Emotional_Analysis",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"region:us"
]
| null | 2023-09-18T03:42:58Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- FastJobs/Visual_Emotional_Analysis
metrics:
- accuracy
- precision
- f1
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: FastJobs/Visual_Emotional_Analysis
type: FastJobs/Visual_Emotional_Analysis
config: FastJobs--Visual_Emotional_Analysis
split: train
args: FastJobs--Visual_Emotional_Analysis
metrics:
- name: Accuracy
type: accuracy
value: 0.66875
- name: Precision
type: precision
value: 0.7104119480438352
- name: F1
type: f1
value: 0.6712765732314218
---
# Emotion Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
on the [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset.
In theory, random-guess accuracy on this dataset is 0.125 (one of 8 labels).
It achieves the following results on the evaluation set:
- Loss: 1.0511
- Accuracy: 0.6687
- Precision: 0.7104
- F1: 0.6713
## Model description
The Vision Transformer base version trained on ImageNet-21K released by Google.
Further details can be found on their [repo](https://huggingface.co/google/vit-base-patch16-224-in21k).
## Training and evaluation data
### Data Split
Trained on [FastJobs/Visual_Emotional_Analysis](https://huggingface.co/datasets/FastJobs/Visual_Emotional_Analysis) dataset.
Used a 4:1 ratio for the training and development sets with a random seed of 42.
A seed of 42 was also used for batching the data; completely unrelated, lol.
### Pre-processing Augmentation
The main pre-processing for both training and evaluation includes:
- Bilinear interpolation to resize the image to (224, 224, 3), because the original model was trained on ImageNet images at this resolution
- Normalizing images with a mean and standard deviation of [0.5, 0.5, 0.5], just like the original model
Beyond the pre-processing above, the training set was augmented using the following (see the sketch after this list):
- Random horizontal & vertical flip
- Color jitter
- Random resized crop
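One plausible way to express this augmentation pipeline with torchvision (the jitter strengths and crop parameters shown are assumptions, since the card does not state exact values):
```python
from torchvision import transforms

# Illustrative sketch of the augmentations described above; the ColorJitter
# strengths are assumptions, as the card does not give exact parameters.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomResizedCrop(224, interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```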
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 150
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 2.079 | 1.0 | 10 | 2.0895 | 0.0563 | 0.0604 | 0.0521 |
| 2.0789 | 2.0 | 20 | 2.0851 | 0.0563 | 0.0602 | 0.0529 |
| 2.0717 | 3.0 | 30 | 2.0773 | 0.0813 | 0.0858 | 0.0783 |
| 2.0613 | 4.0 | 40 | 2.0658 | 0.125 | 0.1997 | 0.1333 |
| 2.0445 | 5.0 | 50 | 2.0483 | 0.1875 | 0.2569 | 0.1934 |
| 2.0176 | 6.0 | 60 | 2.0206 | 0.2313 | 0.2692 | 0.2384 |
| 1.9894 | 7.0 | 70 | 1.9763 | 0.3063 | 0.3033 | 0.2983 |
| 1.9232 | 8.0 | 80 | 1.8912 | 0.3625 | 0.3307 | 0.3194 |
| 1.8256 | 9.0 | 90 | 1.7775 | 0.4062 | 0.3531 | 0.3600 |
| 1.732 | 10.0 | 100 | 1.6580 | 0.4688 | 0.4158 | 0.4133 |
| 1.6406 | 11.0 | 110 | 1.5597 | 0.5 | 0.4358 | 0.4370 |
| 1.5584 | 12.0 | 120 | 1.4855 | 0.5125 | 0.4792 | 0.4784 |
| 1.4898 | 13.0 | 130 | 1.4248 | 0.5437 | 0.5011 | 0.5098 |
| 1.4216 | 14.0 | 140 | 1.3692 | 0.5687 | 0.5255 | 0.5289 |
| 1.3701 | 15.0 | 150 | 1.3158 | 0.5687 | 0.5346 | 0.5360 |
| 1.3438 | 16.0 | 160 | 1.2842 | 0.5437 | 0.5451 | 0.5098 |
| 1.2799 | 17.0 | 170 | 1.2620 | 0.5625 | 0.5169 | 0.5194 |
| 1.2481 | 18.0 | 180 | 1.2321 | 0.5938 | 0.6003 | 0.5811 |
| 1.1993 | 19.0 | 190 | 1.2108 | 0.5687 | 0.5640 | 0.5412 |
| 1.1599 | 20.0 | 200 | 1.1853 | 0.55 | 0.5434 | 0.5259 |
| 1.1087 | 21.0 | 210 | 1.1839 | 0.5563 | 0.5670 | 0.5380 |
| 1.0757 | 22.0 | 220 | 1.1905 | 0.55 | 0.5682 | 0.5308 |
| 0.9985 | 23.0 | 230 | 1.1509 | 0.6375 | 0.6714 | 0.6287 |
| 0.9776 | 24.0 | 240 | 1.1048 | 0.6188 | 0.6222 | 0.6127 |
| 0.9331 | 25.0 | 250 | 1.1196 | 0.6125 | 0.6345 | 0.6072 |
| 0.8887 | 26.0 | 260 | 1.1424 | 0.5938 | 0.6174 | 0.5867 |
| 0.879 | 27.0 | 270 | 1.1232 | 0.6062 | 0.6342 | 0.5978 |
| 0.8369 | 28.0 | 280 | 1.1172 | 0.6 | 0.6480 | 0.5865 |
| 0.7864 | 29.0 | 290 | 1.1285 | 0.5938 | 0.6819 | 0.5763 |
| 0.7775 | 30.0 | 300 | 1.0511 | 0.6687 | 0.7104 | 0.6713 |
| 0.7281 | 31.0 | 310 | 1.0295 | 0.6562 | 0.6596 | 0.6514 |
| 0.7348 | 32.0 | 320 | 1.0398 | 0.6375 | 0.6353 | 0.6319 |
| 0.6896 | 33.0 | 330 | 1.0729 | 0.6062 | 0.6205 | 0.6062 |
| 0.613 | 34.0 | 340 | 1.0505 | 0.6438 | 0.6595 | 0.6421 |
| 0.6034 | 35.0 | 350 | 1.0827 | 0.6375 | 0.6593 | 0.6376 |
| 0.6236 | 36.0 | 360 | 1.1271 | 0.6125 | 0.6238 | 0.6087 |
| 0.5607 | 37.0 | 370 | 1.0985 | 0.6062 | 0.6254 | 0.6015 |
| 0.5835 | 38.0 | 380 | 1.0791 | 0.6375 | 0.6624 | 0.6370 |
| 0.5889 | 39.0 | 390 | 1.1300 | 0.6062 | 0.6529 | 0.6092 |
| 0.5137 | 40.0 | 400 | 1.1062 | 0.625 | 0.6457 | 0.6226 |
| 0.4804 | 41.0 | 410 | 1.1452 | 0.6188 | 0.6403 | 0.6158 |
| 0.4811 | 42.0 | 420 | 1.1271 | 0.6375 | 0.6478 | 0.6347 |
| 0.5179 | 43.0 | 430 | 1.1942 | 0.5875 | 0.6185 | 0.5874 |
| 0.4744 | 44.0 | 440 | 1.1515 | 0.6125 | 0.6329 | 0.6160 |
| 0.4327 | 45.0 | 450 | 1.1321 | 0.6375 | 0.6669 | 0.6412 |
| 0.4565 | 46.0 | 460 | 1.1742 | 0.625 | 0.6478 | 0.6251 |
| 0.4006 | 47.0 | 470 | 1.1675 | 0.6062 | 0.6361 | 0.6079 |
| 0.4541 | 48.0 | 480 | 1.1542 | 0.6125 | 0.6404 | 0.6152 |
| 0.3689 | 49.0 | 490 | 1.2190 | 0.5875 | 0.6134 | 0.5896 |
| 0.3794 | 50.0 | 500 | 1.2002 | 0.6062 | 0.6155 | 0.6005 |
| 0.429 | 51.0 | 510 | 1.2904 | 0.575 | 0.6207 | 0.5849 |
| 0.431 | 52.0 | 520 | 1.2416 | 0.5875 | 0.6028 | 0.5794 |
| 0.3813 | 53.0 | 530 | 1.2073 | 0.6125 | 0.6449 | 0.6142 |
| 0.365 | 54.0 | 540 | 1.2083 | 0.6062 | 0.6454 | 0.6075 |
| 0.3714 | 55.0 | 550 | 1.1627 | 0.6375 | 0.6576 | 0.6390 |
| 0.3393 | 56.0 | 560 | 1.1620 | 0.6438 | 0.6505 | 0.6389 |
| 0.3676 | 57.0 | 570 | 1.1501 | 0.625 | 0.6294 | 0.6258 |
| 0.3371 | 58.0 | 580 | 1.2779 | 0.5875 | 0.6000 | 0.5792 |
| 0.3325 | 59.0 | 590 | 1.2719 | 0.575 | 0.5843 | 0.5651 |
| 0.3509 | 60.0 | 600 | 1.2956 | 0.6 | 0.6422 | 0.6059 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3 |
m-aliabbas1/set_fit_practice | m-aliabbas1 | 2023-09-18T03:49:01Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-18T03:48:43Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# m-aliabbas1/set_fit_practice
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("m-aliabbas1/set_fit_practice")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
shaowenchen/chinese-alpaca-2-13b-gguf | shaowenchen | 2023-09-18T03:44:45Z | 100 | 0 | null | [
"gguf",
"meta",
"llama",
"llama-2",
"alpaca",
"alpaca-2",
"chinese",
"text-generation",
"zh",
"license:other",
"region:us"
]
| text-generation | 2023-09-16T23:34:00Z | ---
inference: false
language:
- zh
license: other
model_creator: ziqingyang
model_link: https://huggingface.co/ziqingyang/chinese-alpaca-2-13b
model_name: chinese-alpaca-2-13b
model_type: llama
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- meta
- gguf
- llama
- llama-2
- alpaca
- alpaca-2
- chinese
---
## Provided files
| Name | Quant method | Size |
| -------------------------------- | ------------ | ------- |
| chinese-alpaca-2-13b.Q2_K.gguf | Q2_K | 5.2 GB |
| chinese-alpaca-2-13b.Q3_K.gguf | Q3_K | 6.0 GB |
| chinese-alpaca-2-13b.Q3_K_L.gguf | Q3_K_L | 6.6 GB |
| chinese-alpaca-2-13b.Q3_K_S.gguf | Q3_K_S | 5.4 GB |
| chinese-alpaca-2-13b.Q4_0.gguf | Q4_0 | 7.0 GB |
| chinese-alpaca-2-13b.Q4_1.gguf | Q4_1 | 7.8 GB |
| chinese-alpaca-2-13b.Q4_K.gguf | Q4_K | 7.5 GB |
| chinese-alpaca-2-13b.Q4_K_S.gguf | Q4_K_S | 7.1 GB |
| chinese-alpaca-2-13b.Q5_0.gguf | Q5_0 | 8.5 GB |
| chinese-alpaca-2-13b.Q5_1.gguf | Q5_1 | 9.3 GB |
| chinese-alpaca-2-13b.Q5_K.gguf | Q5_K | 8.8 GB |
| chinese-alpaca-2-13b.Q5_K_S.gguf | Q5_K_S | 8.5 GB |
| chinese-alpaca-2-13b.Q6_K.gguf | Q6_K | 10.0 GB |
| chinese-alpaca-2-13b.Q8_0.gguf | Q8_0 | 13.0 GB |
| chinese-alpaca-2-13b.gguf | full | 25.0 GB |
Usage:
```bash
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```
You can then open http://localhost:8000/docs to view the Swagger UI.
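Alternatively, a minimal sketch of loading one of the quantized files directly with the `llama-cpp-python` bindings (the model path and prompt are placeholders):
```python
from llama_cpp import Llama

# Minimal sketch; point model_path at a downloaded GGUF file.
llm = Llama(model_path="/models/chinese-alpaca-2-13b.Q4_0.gguf", n_ctx=2048)
output = llm("请介绍一下你自己。", max_tokens=128)
print(output["choices"][0]["text"])
```
|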
Chickenfish/Dayte_dreambooth | Chickenfish | 2023-09-18T03:41:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-08-22T07:18:04Z | ---
license: creativeml-openrail-m
---
|
nemesis1/chlldrgnrc | nemesis1 | 2023-09-18T03:28:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:28:31Z | ---
license: creativeml-openrail-m
---
|
LarryAIDraw/shana1-000008 | LarryAIDraw | 2023-09-18T03:26:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:21:22Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/47454/shana-or-character-lora-974 |
ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt | ys7yoo | 2023-09-18T03:26:33Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T03:03:05Z | ---
base_model: klue/roberta-large
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
- Mse: 0.3621
- Mae: 0.4438
- R2: 0.8342
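## Usage
A minimal inference sketch for this sentence-pair regression model (the Korean sentences are placeholders; `function_to_apply="none"` returns the raw similarity score):
```python
from transformers import pipeline

# Minimal sketch; the sentence pair below is a placeholder.
scorer = pipeline(
    "text-classification",
    model="ys7yoo/sts_roberta-large_lr1e-05_wd1e-03_ep7_ckpt",
    function_to_apply="none",  # raw regression output instead of a softmax
)
print(scorer({"text": "오늘 날씨가 좋다.", "text_pair": "오늘은 날씨가 맑다."}))
```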
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.8712 | 1.0 | 183 | 0.5118 | 0.5118 | 0.5409 | 0.7656 |
| 0.1606 | 2.0 | 366 | 0.4621 | 0.4621 | 0.5142 | 0.7884 |
| 0.1111 | 3.0 | 549 | 0.4687 | 0.4687 | 0.5088 | 0.7854 |
| 0.0837 | 4.0 | 732 | 0.4317 | 0.4317 | 0.4906 | 0.8023 |
| 0.0681 | 5.0 | 915 | 0.4662 | 0.4662 | 0.5091 | 0.7865 |
| 0.0559 | 6.0 | 1098 | 0.3742 | 0.3742 | 0.4524 | 0.8286 |
| 0.0485 | 7.0 | 1281 | 0.3621 | 0.3621 | 0.4438 | 0.8342 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
LarryAIDraw/Goddess_of_Light_Avatar | LarryAIDraw | 2023-09-18T03:26:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:20:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/145816/tang-wutong-or-or-goddess-of-light-fusion-skill-avatar-or-soul-land-ii-or-douluo-dalu-ii-jueshi-tangmen-or-2-or-manhua |
LarryAIDraw/schwarz_arknights | LarryAIDraw | 2023-09-18T03:25:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-18T03:19:18Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/130905/schwarz-arknights |
guydebruyn/Reinforce-Copter2 | guydebruyn | 2023-09-18T03:24:04Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-18T03:24:01Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Copter2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zeenfts/output_dir | zeenfts | 2023-09-18T03:17:42Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-16T08:08:06Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: output_dir
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2976
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: reduce_lr_on_plateau
- num_epochs: 77
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 2 | 2.0706 | 0.15 |
| No log | 2.0 | 5 | 2.0309 | 0.2313 |
| No log | 2.8 | 7 | 1.9846 | 0.2562 |
| 1.9868 | 4.0 | 10 | 1.8915 | 0.4062 |
| 1.9868 | 4.8 | 12 | 1.8529 | 0.3125 |
| 1.9868 | 6.0 | 15 | 1.7422 | 0.4125 |
| 1.9868 | 6.8 | 17 | 1.6761 | 0.4313 |
| 1.6815 | 8.0 | 20 | 1.6310 | 0.4562 |
| 1.6815 | 8.8 | 22 | 1.5900 | 0.45 |
| 1.6815 | 10.0 | 25 | 1.5402 | 0.4313 |
| 1.6815 | 10.8 | 27 | 1.5018 | 0.5 |
| 1.4233 | 12.0 | 30 | 1.4620 | 0.4875 |
| 1.4233 | 12.8 | 32 | 1.4286 | 0.5062 |
| 1.4233 | 14.0 | 35 | 1.4045 | 0.5125 |
| 1.4233 | 14.8 | 37 | 1.3860 | 0.5312 |
| 1.2127 | 16.0 | 40 | 1.3571 | 0.5 |
| 1.2127 | 16.8 | 42 | 1.3293 | 0.5375 |
| 1.2127 | 18.0 | 45 | 1.3742 | 0.4813 |
| 1.2127 | 18.8 | 47 | 1.3151 | 0.5437 |
| 1.0075 | 20.0 | 50 | 1.3053 | 0.5312 |
| 1.0075 | 20.8 | 52 | 1.3266 | 0.5375 |
| 1.0075 | 22.0 | 55 | 1.2964 | 0.5312 |
| 1.0075 | 22.8 | 57 | 1.2278 | 0.5875 |
| 0.8232 | 24.0 | 60 | 1.2501 | 0.5563 |
| 0.8232 | 24.8 | 62 | 1.2330 | 0.575 |
| 0.8232 | 26.0 | 65 | 1.2198 | 0.5625 |
| 0.8232 | 26.8 | 67 | 1.2071 | 0.5875 |
| 0.6738 | 28.0 | 70 | 1.2643 | 0.5875 |
| 0.6738 | 28.8 | 72 | 1.2594 | 0.5563 |
| 0.6738 | 30.0 | 75 | 1.2263 | 0.5312 |
| 0.6738 | 30.8 | 77 | 1.3218 | 0.5188 |
| 0.5715 | 32.0 | 80 | 1.2593 | 0.5312 |
| 0.5715 | 32.8 | 82 | 1.2214 | 0.5625 |
| 0.5715 | 34.0 | 85 | 1.3060 | 0.55 |
| 0.5715 | 34.8 | 87 | 1.2727 | 0.5563 |
| 0.4523 | 36.0 | 90 | 1.2749 | 0.5375 |
| 0.4523 | 36.8 | 92 | 1.3570 | 0.5437 |
| 0.4523 | 38.0 | 95 | 1.2815 | 0.5687 |
| 0.4523 | 38.8 | 97 | 1.2233 | 0.6062 |
| 0.3971 | 40.0 | 100 | 1.2097 | 0.6 |
| 0.3971 | 40.8 | 102 | 1.2881 | 0.5813 |
| 0.3971 | 42.0 | 105 | 1.2400 | 0.575 |
| 0.3971 | 42.8 | 107 | 1.3140 | 0.5375 |
| 0.3616 | 44.0 | 110 | 1.1525 | 0.6125 |
| 0.3616 | 44.8 | 112 | 1.2725 | 0.5938 |
| 0.3616 | 46.0 | 115 | 1.2634 | 0.5813 |
| 0.3616 | 46.8 | 117 | 1.2299 | 0.6 |
| 0.338 | 48.0 | 120 | 1.3408 | 0.5375 |
| 0.338 | 48.8 | 122 | 1.1931 | 0.5938 |
| 0.338 | 50.0 | 125 | 1.2806 | 0.5938 |
| 0.338 | 50.8 | 127 | 1.2410 | 0.575 |
| 0.3445 | 52.0 | 130 | 1.2901 | 0.5813 |
| 0.3445 | 52.8 | 132 | 1.2504 | 0.6062 |
| 0.3445 | 54.0 | 135 | 1.1614 | 0.5875 |
| 0.3445 | 54.8 | 137 | 1.2247 | 0.6062 |
| 0.3299 | 56.0 | 140 | 1.2591 | 0.5625 |
| 0.3299 | 56.8 | 142 | 1.2629 | 0.5687 |
| 0.3299 | 58.0 | 145 | 1.2369 | 0.5938 |
| 0.3299 | 58.8 | 147 | 1.2771 | 0.575 |
| 0.3292 | 60.0 | 150 | 1.3284 | 0.5875 |
| 0.3292 | 60.8 | 152 | 1.2550 | 0.5625 |
| 0.3292 | 61.6 | 154 | 1.3047 | 0.55 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/hayasaka_mirei_idolmastercinderellagirls | CyberHarem | 2023-09-18T03:11:12Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/hayasaka_mirei_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-18T02:48:28Z | ---
license: mit
datasets:
- CyberHarem/hayasaka_mirei_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hayasaka_mirei_idolmastercinderellagirls
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4320, you need to download `4320/hayasaka_mirei_idolmastercinderellagirls.pt` as the embedding and `4320/hayasaka_mirei_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4320**, with the score of 0.973. The trigger words are:
1. `hayasaka_mirei_idolmastercinderellagirls`
2. `purple_hair, eyepatch, multicolored_hair, brown_eyes, short_hair, blush, red_hair, streaked_hair, open_mouth, fang, heart, hair_between_eyes`
Use of this model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8100 | 0.966 | [Download](8100/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.969 | [Download](7560/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.957 | [Download](7020/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.968 | [Download](6480/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.970 | [Download](5940/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.907 | [Download](5400/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.972 | [Download](4860/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| **4320** | **0.973** | [**Download**](4320/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.964 | [Download](3780/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.961 | [Download](3240/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.958 | [Download](2700/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.968 | [Download](2160/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.972 | [Download](1620/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.959 | [Download](1080/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.929 | [Download](540/hayasaka_mirei_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
EldritchAdam/LaxpeintXL | EldritchAdam | 2023-09-18T03:10:49Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-09-04T19:16:06Z | ---
license: openrail
---
<div><p><strong><span style="color:rgb(250, 82, 82)">LaxpeintXL - tentatively final version for SDXL 1.0</span></strong></p>
<p>This model is a companion to <a target="_blank" rel="ugc" href="https://huggingface.co/EldritchAdam/ClassipeintXL">ClassipeintXL</a>. Although I see ClassipeintXL as really crucial to SDXL (and how I use it), LaxpeintXL is not so obviously necessary. You can get much of this style with the right combination of artist names and aesthetic terms. So why use a LoRA?</p>
<p>As much as SDXL is a huge leap forward from SD2, it shares a failing - albeit to a much lesser extent - that keeping an aesthetic consistent is very difficult. The same terms and artist names will not have the same effect for a portrait as for a landscape or a sci-fi scene etc.</p>
<p>This LoRA helps you to more consistently get that slick digital paint style in every image. Prompt for whatever you want, it's going to be beautiful.</p>
<p><strong><em><span style="color:rgb(190, 75, 219)">Recommended settings for use:</span></em></strong></p><p><a target="_blank" rel="ugc" href="https://pastebin.com/tXKwTkxC"><strong><em><span style="color:rgb(76, 110, 245)">You can go here (pastebin) to download a ComfyUI workflow</span></em></strong></a><span style="color:rgb(34, 139, 230)"> like what I used, but without custom nodes that are embedded in my image uploads on CivitAI.</span></p>
<ul>
<li>
<p>Start with a full 1.0 LoRA strength and adjust down to 0.7 or 0.8 for a subtler painterly effect. You can adjust upward (to 1.2 or maybe a little more) to maximize the painterly appearance, but it can start to introduce some quirks</p>
</li>
<li>
<p>Use the LoRA with your preferred SDXL model with no refiner. I have so far just stuck with base SDXL1.0 but other finetunes work great as well.</p>
</li>
<li>
<p>I recommend the DPM samplers, but use your favorite. Some may produce softer painting styles that don't suit my taste as much but whatever you prefer is great.</p>
</li>
<li>
<p>Don't do anything special for your prompt - just describe what you want to see. You don't really need to use any keywords unless some subject matter seems to override the LoRA's style, then you can bring it back in line by using the terms "digital painting of..." and "by LaxpeintXL".</p>
</li>
</ul>
</div>
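<p>A minimal sketch of loading this LoRA with diffusers (the <code>weight_name</code> below is a placeholder, since the exact filename in this repo is not stated here; the trigger phrase "by LaxpeintXL" follows the prompting notes above):</p>

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Minimal sketch; weight_name is a placeholder for the .safetensors file in the repo.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("EldritchAdam/LaxpeintXL", weight_name="LaxpeintXL.safetensors")
image = pipe("digital painting of a lighthouse at dusk, by LaxpeintXL").images[0]
```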
<div style="max-width:500px">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/0B4gg9e6HNzYI-2dJzIZH.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/gH9bA1TDD2S_bJzheUXr_.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/cu0EyW4eOqr9iVhTN2Cgc.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/o0El5-8ms0J-Ae1gqNi71.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/CbnMKPkqAXM4st88RqXmj.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/mCmmJXYUmD8QamftYjWuQ.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/z4DXgHzHjKbh1mkfW7ur_.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/YdvSPWp38oa-JZgEqnEfp.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/zR1huUXvEl7b6kFdbuxRg.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/jiFLLFahcoE72BcjFKuws.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/8JB6sAgRnaHJ5jsgTpHki.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/LJQJw0V1E3NCdVEMUwgW7.png">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63169de2f5e32157c5226974/LyZL9NLV2mSxQtQae4trO.png">
</div> |
GAS17/fgdpersn | GAS17 | 2023-09-18T03:07:47Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:GAS17/fgdperson",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-18T02:33:17Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: fgd person
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- GAS17/fgdperson
---
# LoRA DreamBooth - GAS17/fgdpersn
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained on the concept prompt:
`fgd person`
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible watermark library:
```
pip install invisible_watermark transformers accelerate safetensors
```
To use the base model together with these LoRA weights, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
# This is where you load your trained weights
pipe.load_lora_weights('GAS17/fgdpersn')
pipe.to("cuda")
prompt = "A majestic fgd person jumping from a big stone at night"
image = pipe(prompt=prompt, num_inference_steps=50).images[0]
```
|
ys7yoo/sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep5_ckpt | ys7yoo | 2023-09-18T02:54:33Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"base_model:finetune:ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T02:25:10Z | ---
base_model: ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3
tags:
- generated_from_trainer
datasets:
- klue
model-index:
- name: sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep5_ckpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sts_nli_roberta-large_lr1e-05_wd1e-03_ep3_lr1e-05_wd1e-03_ep5_ckpt
This model is a fine-tuned version of [ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3](https://huggingface.co/ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
- Mse: 0.3191
- Mae: 0.4161
- R2: 0.8539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 1.0641 | 1.0 | 183 | 0.5074 | 0.5074 | 0.5341 | 0.7676 |
| 0.1359 | 2.0 | 366 | 0.3199 | 0.3199 | 0.4232 | 0.8535 |
| 0.0958 | 3.0 | 549 | 0.3589 | 0.3589 | 0.4349 | 0.8356 |
| 0.0748 | 4.0 | 732 | 0.3385 | 0.3385 | 0.4284 | 0.8450 |
| 0.0617 | 5.0 | 915 | 0.3191 | 0.3191 | 0.4161 | 0.8539 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AyanKumarBhunia/textual_inversion_cat | AyanKumarBhunia | 2023-09-18T02:49:48Z | 30 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-18T02:21:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - AyanKumarBhunia/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
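A minimal sketch of loading these weights with diffusers (the learned token string `<cat-toy>` is an assumption, since the card does not state the placeholder token used during training):
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch; "<cat-toy>" is an assumed token name, not confirmed by the card.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("AyanKumarBhunia/textual_inversion_cat")
image = pipe("a photo of <cat-toy> sitting on a bench").images[0]
image.save("cat.png")
```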
|
huyen89/taxi-v3 | huyen89 | 2023-09-18T02:33:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-23T01:57:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks;
# it downloads and unpickles the saved Q-table from the Hub.
model = load_from_hub(repo_id="huyen89/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hegelty/KcBERT-Base-finetuned-hate | hegelty | 2023-09-18T02:30:55Z | 113 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"ko",
"license:bsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T04:23:36Z | ---
license: bsd
language:
- ko
library_name: transformers
---
# Hate Speech Classification
tag 0: hate
tag 1: normal
# Source Code
https://github.com/hegelty/hate-classifier
# Dataset
https://github.com/smilegate-ai/korean_unsmile_dataset
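# Usage
A minimal inference sketch using the Hugging Face pipeline API (the input sentence is a placeholder):
```python
from transformers import pipeline

# Minimal sketch; label ids follow the mapping above (0 = hate, 1 = normal).
classifier = pipeline("text-classification", model="hegelty/KcBERT-Base-finetuned-hate")
print(classifier("예시 문장입니다."))  # placeholder Korean sentence
```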
|
ShivamMangale/XLM-Roberta-base-finetuned-squad-syn-first | ShivamMangale | 2023-09-18T02:21:07Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-18T01:29:23Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-finetuned-squad-syn-first
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-finetuned-squad-syn-first
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
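## Usage
A minimal inference sketch using the question-answering pipeline (the question and context are placeholders):
```python
from transformers import pipeline

# Minimal sketch; replace the question/context with your own inputs.
qa = pipeline("question-answering", model="ShivamMangale/XLM-Roberta-base-finetuned-squad-syn-first")
result = qa(
    question="Who wrote Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```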
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
wu981526092/Sentence-Level-Stereotype-Detector | wu981526092 | 2023-09-18T01:49:58Z | 15,593 | 4 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:stereoset",
"dataset:crows_pairs",
"dataset:wu981526092/MGSD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-29T16:02:37Z | ---
license: mit
datasets:
- stereoset
- crows_pairs
- wu981526092/MGSD
language:
- en
metrics:
- f1
- recall
- precision
- accuracy
---
# Sentence-Level Stereotype Classifier
The Sentence-Level Stereotype Classifier is a transformer-based model developed to detect and classify different types of stereotypes in text at the sentence level. It is designed to recognize stereotypical and anti-stereotypical statements about gender, race, profession, and religion. The model can help in developing applications aimed at mitigating stereotypical language use and promoting fairness and inclusivity in natural language processing tasks.
## Model Architecture
The model is built on the pre-trained DistilBERT model and fine-tuned on the MGSD dataset for sentence-level stereotype classification.
## Classes
The model identifies nine classes:
0. unrelated: The token does not indicate any stereotype.
1. stereotype_gender: The token indicates a gender stereotype.
2. anti-stereotype_gender: The token indicates an anti-gender stereotype.
3. stereotype_race: The token indicates a racial stereotype.
4. anti-stereotype_race: The token indicates an anti-racial stereotype.
5. stereotype_profession: The token indicates a professional stereotype.
6. anti-stereotype_profession: The token indicates an anti-professional stereotype.
7. stereotype_religion: The token indicates a religious stereotype.
8. anti-stereotype_religion: The token indicates an anti-religious stereotype.
## Usage
The model can be used as a part of the Hugging Face's pipeline for Text Classification.
```python
from transformers import pipeline
nlp = pipeline("text-classification", model="wu981526092/Sentence-Level-Stereotype-Detector", tokenizer="wu981526092/Sentence-Level-Stereotype-Detector")
result = nlp("Text containing potential stereotype...")
print(result)
``` |
wu981526092/Token-Level-Stereotype-Detector | wu981526092 | 2023-09-18T01:48:45Z | 110 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:stereoset",
"dataset:crows_pairs",
"dataset:wu981526092/MGSD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-06-24T10:21:27Z | ---
license: mit
datasets:
- stereoset
- crows_pairs
- wu981526092/MGSD
language:
- en
metrics:
- f1
- recall
- precision
- accuracy
---
# Token-Level Stereotype Classifier
The Token-Level Stereotype Classifier is a transformer-based model developed to detect and classify different types of stereotypes in text at the token level. It is designed to recognize stereotypical and anti-stereotypical statements about gender, race, profession, and religion. The model can help in developing applications aimed at mitigating stereotypical language use and promoting fairness and inclusivity in natural language processing tasks.
## Model Architecture
The model is built on the pre-trained DistilBERT model and fine-tuned on the MGSD dataset for token-level stereotype classification.
## Classes
The model identifies nine classes:
1. unrelated: The token does not indicate any stereotype.
2. stereotype_gender: The token indicates a gender stereotype.
3. anti-stereotype_gender: The token indicates an anti-gender stereotype.
4. stereotype_race: The token indicates a racial stereotype.
5. anti-stereotype_race: The token indicates an anti-racial stereotype.
6. stereotype_profession: The token indicates a professional stereotype.
7. anti-stereotype_profession: The token indicates an anti-professional stereotype.
8. stereotype_religion: The token indicates a religious stereotype.
9. anti-stereotype_religion: The token indicates an anti-religious stereotype.
## Usage
The model can be used as a part of the Hugging Face's pipeline for Named Entity Recognition (NER).
```python
from transformers import pipeline
nlp = pipeline("ner", model="wu981526092/Token-Level-Stereotype-Detector", tokenizer="wu981526092/Token-Level-Stereotype-Detector")
result = nlp("Text containing potential stereotype...")
print(result)
``` |
kiranahp/indobert_qa_skripsi_big | kiranahp | 2023-09-18T01:32:54Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-18T01:20:42Z | ---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: indobert_qa_skripsi_big
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert_qa_skripsi_big
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6607 | 1.0 | 4101 | 1.7878 |
| 1.3687 | 2.0 | 8202 | 1.7563 |
| 1.1822 | 3.0 | 12303 | 1.8011 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
KETI-AIR-Downstream/long-ke-t5-base-summarization_e10 | KETI-AIR-Downstream | 2023-09-18T01:28:33Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:jsonl_dataset_sum.py",
"base_model:KETI-AIR/long-ke-t5-base",
"base_model:finetune:KETI-AIR/long-ke-t5-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-06-05T04:24:59Z | ---
tags:
- generated_from_trainer
datasets:
- jsonl_dataset_sum.py
metrics:
- rouge
widget:
- text: 'summarization-num_lines-1: 현대자동차는 18일(현지 시간) 이탈리아 레이크 코모에서 개최된 ''현대 리유니온''
행사에서 ''포니 쿠페 콘셉트'' 복원 모델을 세계에 첫 공개했습니다. 이 프로젝트는 현대차의 창업자인 정주영 선대 회장의 수출보국(輸出報國)
정신과 포니 쿠페를 통한 글로벌 브랜드 정립에 대한 끊임없는 열정과 도전 정신을 재조명하기 위한 것입니다. 현대차에 따르면, 이번 현대 리유니온
행사는 회사의 역사를 다시 돌아보며 변하지 않는 미래 지향적인 비전과 방향성을 공유하는 브랜드 유산 행사입니다.'
example_title: sample 1
base_model: KETI-AIR/long-ke-t5-base
model-index:
- name: summarization_all
results:
- task:
type: summarization
name: Summarization
dataset:
name: jsonl_dataset_sum.py
type: jsonl_dataset_sum.py
config: 'null'
split: None
metrics:
- type: rouge
value: 21.9857
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization_all
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the jsonl_dataset_sum.py dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1442
- Rouge1: 21.9857
- Rouge2: 10.2876
- Rougel: 21.4026
- Rougelsum: 21.4278
- Gen Len: 86.2560
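## Usage
A minimal generation sketch (the task prefix mirrors the widget example above; the document text is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch; prepend the "summarization-num_lines-1: " task prefix.
repo = "KETI-AIR-Downstream/long-ke-t5-base-summarization_e10"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "summarization-num_lines-1: " + "요약할 문서 본문..."  # placeholder document
inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```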
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2503 | 1.0 | 184670 | 1.2439 | 20.2525 | 9.1467 | 19.7454 | 19.771 | 87.1766 |
| 1.1629 | 2.0 | 369340 | 1.1773 | 21.0068 | 9.6691 | 20.4565 | 20.4888 | 89.6074 |
| 1.1087 | 3.0 | 554010 | 1.1431 | 21.0216 | 9.6545 | 20.489 | 20.5108 | 85.5895 |
| 1.056 | 4.0 | 738680 | 1.1247 | 21.6776 | 10.1424 | 21.09 | 21.1168 | 89.6576 |
| 1.0199 | 5.0 | 923350 | 1.1179 | 21.6563 | 10.0965 | 21.0814 | 21.1056 | 89.2454 |
| 0.9652 | 6.0 | 1108020 | 1.1122 | 21.6209 | 10.0725 | 21.0623 | 21.0864 | 86.7079 |
| 0.92 | 7.0 | 1292690 | 1.1136 | 21.9396 | 10.2734 | 21.3465 | 21.3745 | 86.5547 |
| 0.8804 | 8.0 | 1477360 | 1.1228 | 21.8457 | 10.1858 | 21.2552 | 21.278 | 87.6413 |
| 0.8447 | 9.0 | 1662030 | 1.1327 | 21.92 | 10.2635 | 21.3415 | 21.3633 | 86.4453 |
| 0.7678 | 10.0 | 1846700 | 1.1442 | 21.9857 | 10.2876 | 21.4026 | 21.4278 | 86.2560 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko | KETI-AIR-Downstream | 2023-09-18T01:27:39Z | 159 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"ko",
"base_model:KETI-AIR/long-ke-t5-base",
"base_model:finetune:KETI-AIR/long-ke-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-04-28T14:19:27Z | ---
language:
- en
- ko
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
metrics:
- bleu
pipeline_tag: translation
widget:
- text: 'translate_en2ko: The Seoul Metropolitan Government said Wednesday that it
would develop an AI-based congestion monitoring system to provide better information
to passengers about crowd density at each subway station.'
example_title: Sample 1
- text: 'translate_en2ko: According to Seoul Metro, the operator of the subway service
in Seoul, the new service will help analyze the real-time flow of passengers and
crowd levels in subway compartments, improving operational efficiency.'
example_title: Sample 2
base_model: KETI-AIR/long-ke-t5-base
model-index:
- name: en2ko
results:
- task:
type: translation
name: Translation
dataset:
name: KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
koen,none,none,none,none
type: KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation
args: koen,none,none,none,none
metrics:
- type: bleu
value: 42.463
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en2ko
This model is a fine-tuned version of [KETI-AIR/long-ke-t5-base](https://huggingface.co/KETI-AIR/long-ke-t5-base) on the KETI-AIR/aihub_koenzh_food_translation,KETI-AIR/aihub_scitech_translation,KETI-AIR/aihub_scitech20_translation,KETI-AIR/aihub_socialtech20_translation,KETI-AIR/aihub_spoken_language_translation koen,none,none,none,none dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6000
- Bleu: 42.463
- Gen Len: 30.6512
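## Usage
A minimal generation sketch (the task prefix matches the widget examples above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch; prepend the "translate_en2ko: " task prefix.
repo = "KETI-AIR-Downstream/long-ke-t5-base-translation-aihub-en2ko"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("translate_en2ko: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```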
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 0.6989 | 1.0 | 93762 | 0.6666 | 20.3697 | 18.1258 |
| 0.6143 | 2.0 | 187524 | 0.6181 | 21.2903 | 18.1428 |
| 0.5544 | 3.0 | 281286 | 0.6000 | 21.9763 | 18.1424 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0
- Datasets 2.8.0
- Tokenizers 0.13.2 |
KETI-AIR/ke-t5-large | KETI-AIR | 2023-09-18T01:24:55Z | 102 | 8 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z | ---
license: apache-2.0
language: [en, ko]
tags:
- t5
eos_token: "</s>"
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# ke-t5 large
A T5 model pretrained on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5), the [Paper](https://aclanthology.org/2021.findings-emnlp.33/), and the [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-large")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` |
KETI-AIR/ke-t5-base-ko | KETI-AIR | 2023-09-18T01:24:34Z | 378 | 7 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:04Z |
---
language: ko
license: apache-2.0
tags:
- t5
eos_token: </s>
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# Model Card for ke-t5-base-ko
# Model Details
## Model Description
- **Developed by:** Korea Electronics Technology Institute Artificial Intelligence Research Center
- **Shared by [Optional]:** More information needed
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
- **Parent Model:** T5
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- [KE-T5 Github Repo](https://github.com/AIRC-KETI/ke-t5)
- [Paper](https://aclanthology.org/2021.findings-emnlp.33/)
- [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
# Uses
## Direct Use
This model can be used for the task of Text2Text Generation
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
See the [t5-base model card](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
```
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the pre-trained Korean T5 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base-ko")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base-ko")
```
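As a quick smoke test, span in-filling exercises the pre-trained objective. This is our own sketch; since this is a raw pre-trained checkpoint (not fine-tuned on a downstream task), output quality will vary.
```python
# Hypothetical span in-filling example using the widget sentence from this card.
inputs = tokenizer("아버지가 <extra_id_0> 들어가신다.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```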
</details>
|
Navu45/neon_sd_model | Navu45 | 2023-09-18T01:14:45Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-18T00:02:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Navu45/neon_sd_model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Navu45/neon_dreambooth dataset. Some example images are shown below; a minimal loading sketch follows them.




|
ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt | ys7yoo | 2023-09-18T01:08:41Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"base_model:klue/roberta-large",
"base_model:finetune:klue/roberta-large",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-18T00:46:19Z |
---
base_model: klue/roberta-large
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
- f1
model-index:
- name: nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
config: nli
split: validation
args: nli
metrics:
- name: Accuracy
type: accuracy
value: 0.9026666666666666
- name: F1
type: f1
value: 0.9025716877431428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt
This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3425
- Accuracy: 0.9027
- F1: 0.9026
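For quick inference, a sketch along these lines should work. The premise/hypothesis sentences are our own illustrative examples, and the returned label names depend on the checkpoint's label mapping (they may be generic `LABEL_0`/`LABEL_1`/`LABEL_2` rather than entailment/neutral/contradiction).
```python
from transformers import pipeline

# KLUE NLI is a three-way premise/hypothesis classification task
# (entailment, neutral, contradiction).
nli = pipeline(
    "text-classification",
    model="ys7yoo/nli_roberta-large_lr1e-05_wd1e-03_ep3_ckpt",
)
result = nli({"text": "오늘은 하루 종일 비가 내렸다.",      # premise
              "text_pair": "오늘 날씨는 맑고 화창했다."})   # hypothesis
print(result)
```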
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5725 | 1.0 | 391 | 0.3381 | 0.8813 | 0.8811 |
| 0.2182 | 2.0 | 782 | 0.3055 | 0.898 | 0.8979 |
| 0.112 | 3.0 | 1173 | 0.3425 | 0.9027 | 0.9026 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
natsusakiyomi/momijimix-xl | natsusakiyomi | 2023-09-18T00:51:44Z | 0 | 2 | null | [
"license:openrail++",
"region:us"
]
| null | 2023-09-17T21:06:41Z |
---
license: openrail++
---
License
[CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
〇 Use the model without crediting the creator<br>
〇 Sell images generated with the model<br>
〇 Run the model on services that generate images for money<br>
〇 Share merges that use this model<br>
× Sell this model or merges that use it<br>
× Apply different permissions when sharing merges<br>
|
TrevorJS/mtg-phi-1_5-dpo-qlora | TrevorJS | 2023-09-18T00:31:30Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-09-18T00:20:06Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Rewards/chosen: -7.5874
- Rewards/rejected: -24.0497
- Rewards/accuracies: 1.0
- Rewards/margins: 16.4623
- Logps/rejected: -274.3435
- Logps/chosen: -143.2090
- Logits/rejected: -1.8100
- Logits/chosen: -1.4786
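The repository name suggests these are QLoRA (PEFT) adapter weights for phi-1_5. If so, a loading sketch like the following should work; the adapter assumption and the `trust_remote_code` flag are ours, not stated in the card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the DPO-trained adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "TrevorJS/mtg-phi-1_5-dpo-qlora")
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
```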
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0417 | 0.07 | 100 | 0.0418 | -0.3892 | -8.0118 | 0.9792 | 7.6226 | -113.9640 | -71.2264 | 1.8258 | 1.7898 |
| 0.0221 | 0.15 | 200 | 0.0303 | -2.5657 | -10.9212 | 0.9896 | 8.3555 | -143.0585 | -92.9920 | 1.9704 | 2.1047 |
| 0.0107 | 0.22 | 300 | 0.0131 | -1.7388 | -11.6047 | 0.9965 | 9.8659 | -149.8935 | -84.7232 | 1.0731 | 0.9750 |
| 0.0204 | 0.29 | 400 | 0.0108 | -2.0131 | -11.9647 | 0.9965 | 9.9516 | -153.4932 | -87.4658 | 1.3610 | 1.6740 |
| 0.0067 | 0.36 | 500 | 0.0080 | -5.9488 | -19.6561 | 0.9974 | 13.7073 | -230.4076 | -126.8228 | -0.4464 | -0.2114 |
| 0.0 | 0.44 | 600 | 0.0047 | -5.6456 | -20.2381 | 0.9983 | 14.5924 | -236.2268 | -123.7909 | -0.4142 | -0.0244 |
| 0.0003 | 0.51 | 700 | 0.0018 | -7.2250 | -21.3351 | 0.9991 | 14.1101 | -247.1974 | -139.5853 | -0.3510 | -0.0203 |
| 0.0005 | 0.58 | 800 | 0.0008 | -7.2263 | -21.2475 | 0.9991 | 14.0211 | -246.3209 | -139.5981 | -0.8673 | -0.7010 |
| 0.0 | 0.66 | 900 | 0.0009 | -10.2371 | -26.0402 | 0.9991 | 15.8031 | -294.2486 | -169.7062 | -1.9784 | -1.7799 |
| 0.0 | 0.73 | 1000 | 0.0008 | -5.9544 | -22.0767 | 0.9991 | 16.1223 | -254.6137 | -126.8789 | -1.0623 | -0.6039 |
| 0.0 | 0.8 | 1100 | 0.0007 | -7.3374 | -23.8700 | 0.9991 | 16.5327 | -272.5467 | -140.7083 | -1.5517 | -1.1710 |
| 0.0 | 0.87 | 1200 | 0.0007 | -7.6398 | -24.1605 | 0.9991 | 16.5207 | -275.4509 | -143.7327 | -1.8124 | -1.4901 |
| 0.0 | 0.95 | 1300 | 0.0001 | -7.5920 | -24.0476 | 1.0 | 16.4556 | -274.3220 | -143.2550 | -1.8115 | -1.4816 |
| 0.0001 | 1.02 | 1400 | 0.0001 | -7.5872 | -24.0480 | 1.0 | 16.4608 | -274.3262 | -143.2065 | -1.8102 | -1.4791 |
| 0.0 | 1.09 | 1500 | 0.0001 | -7.5874 | -24.0497 | 1.0 | 16.4623 | -274.3435 | -143.2090 | -1.8100 | -1.4786 |
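For interpreting the `Rewards/*` columns above: in DPO these are conventionally the implicit rewards computed from the policy and reference log-probabilities, as sketched below. This is our reading of the standard DPO setup (Rafailov et al., 2023), not something stated in the card.
```latex
% Implicit DPO reward; \beta is the DPO temperature hyperparameter.
r_\theta(x, y) = \beta \, \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
% Rewards/margins = r_\theta(x, y_{\mathrm{chosen}}) - r_\theta(x, y_{\mathrm{rejected}})
```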
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Evan-Lin/yelp-attractive-keyword-1 | Evan-Lin | 2023-09-18T00:07:04Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2023-09-17T10:03:06Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline

# The underlying model is BART-based (see the repo tags), so the
# text2text-generation task is the appropriate pipeline here. The broken
# temporary path in the auto-generated card has been replaced with the repo id.
generator = pipeline("text2text-generation", model="Evan-Lin/yelp-attractive-keyword-1")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

# BART is a seq2seq architecture, so the seq2seq value-head wrapper is used
# here rather than the causal-LM wrapper from the auto-generated card.
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/yelp-attractive-keyword-1")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/yelp-attractive-keyword-1")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
# The forward pass returns (lm_logits, loss, value); the last element is the
# value head's scalar estimate per token position.
outputs = model(**inputs, labels=inputs["input_ids"])
```
|