| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC]: 2020-02-15 11:33:14 to 2025-07-12 18:27:22) | downloads (int64: 0 to 223M) | likes (int64: 0 to 11.7k) | library_name (string, 518 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 to 2025-07-12 18:26:55) | card (string, 11 chars to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
CyberHarem/ogaki_chiaki_yurucamp | CyberHarem | 2023-09-26T19:03:45Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/ogaki_chiaki_yurucamp",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-26T07:19:28Z | ---
license: mit
datasets:
- CyberHarem/ogaki_chiaki_yurucamp
pipeline_tag: text-to-image
tags:
- art
---
# Lora of ogaki_chiaki_yurucamp
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, use them together: the pt file serves as an embedding, while the safetensors file is loaded as a LoRA.
For example, to use the model from step 4480, download `4480/ogaki_chiaki_yurucamp.pt` as the embedding and `4480/ogaki_chiaki_yurucamp.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.
**The best step we recommend is 4480**, with a score of 0.954. The trigger words are:
1. `ogaki_chiaki_yurucamp`
2. `glasses, purple_hair, long_hair, brown_eyes, black-framed_eyewear, blue_hair`
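The file-pairing convention above can be sketched as a tiny helper (the paths follow the example quoted above; how you then load the embedding and the LoRA depends on your generation toolchain):

```python
def files_for_step(step: int, character: str = "ogaki_chiaki_yurucamp"):
    """Return the (embedding, LoRA) file pair that must be used together."""
    embedding = f"{step}/{character}.pt"       # load this as a textual-inversion embedding
    lora = f"{step}/{character}.safetensors"   # load this as a LoRA
    return embedding, lora

# for the recommended step:
emb, lora = files_for_step(4480)
```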
We do not recommend this model for the following groups, and we express our regret:
1. Those who cannot tolerate any deviation from the original character design, however slight.
2. Those whose use cases demand high accuracy in recreating character outfits.
3. Those who cannot accept the inherent randomness of images generated with Stable Diffusion.
4. Those who are uncomfortable with a fully automated LoRA training process, or who believe character models must be trained purely by hand to avoid disrespecting the characters.
5. Those who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | pattern_22 | pattern_23 | pattern_24 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9600 | 0.904 | [Download](9600/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9600/previews/nude.png) | [<NSFW, click to see>](9600/previews/nude2.png) |  |  |
| 8960 | 0.938 | [Download](8960/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8960/previews/nude.png) | [<NSFW, click to see>](8960/previews/nude2.png) |  |  |
| 8320 | 0.913 | [Download](8320/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8320/previews/nude.png) | [<NSFW, click to see>](8320/previews/nude2.png) |  |  |
| 7680 | 0.928 | [Download](7680/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7680/previews/nude.png) | [<NSFW, click to see>](7680/previews/nude2.png) |  |  |
| 7040 | 0.948 | [Download](7040/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7040/previews/nude.png) | [<NSFW, click to see>](7040/previews/nude2.png) |  |  |
| 6400 | 0.951 | [Download](6400/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6400/previews/nude.png) | [<NSFW, click to see>](6400/previews/nude2.png) |  |  |
| 5760 | 0.948 | [Download](5760/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5120 | 0.953 | [Download](5120/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5120/previews/nude.png) | [<NSFW, click to see>](5120/previews/nude2.png) |  |  |
| **4480** | **0.954** | [**Download**](4480/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3840 | 0.932 | [Download](3840/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3200 | 0.964 | [Download](3200/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2560 | 0.944 | [Download](2560/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2560/previews/nude.png) | [<NSFW, click to see>](2560/previews/nude2.png) |  |  |
| 1920 | 0.949 | [Download](1920/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1280 | 0.941 | [Download](1280/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1280/previews/nude.png) | [<NSFW, click to see>](1280/previews/nude2.png) |  |  |
| 640 | 0.868 | [Download](640/ogaki_chiaki_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](640/previews/nude.png) | [<NSFW, click to see>](640/previews/nude2.png) |  |  |
|
DInaLong/videomae-base-finetuned-ucf101-subset | DInaLong | 2023-09-26T19:01:44Z | 62 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-08-14T15:25:58Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4435
- Accuracy: 0.8286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
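As an illustration (a sketch of the behaviour, not the Transformers implementation itself), a linear schedule with `warmup_ratio: 0.1` over 600 steps ramps the learning rate up for the first 60 steps and then decays it linearly to zero:

```python
def linear_schedule_lr(step, base_lr=5e-5, total_steps=600, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)  # 60 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

peak = linear_schedule_lr(60)   # end of warmup: the full 5e-05
end = linear_schedule_lr(600)   # fully decayed: 0.0
```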
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4588 | 0.25 | 150 | 1.1859 | 0.6286 |
| 0.415 | 1.25 | 300 | 0.9017 | 0.6714 |
| 0.3556 | 2.25 | 450 | 0.8084 | 0.7143 |
| 0.0322 | 3.25 | 600 | 0.4435 | 0.8286 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vgarg/my-fw9-identification-model-e5_large_v2 | vgarg | 2023-09-26T18:59:24Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-26T18:55:28Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# vgarg/my-fw9-identification-model-e5_large_v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("vgarg/my-fw9-identification-model-e5_large_v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
MattStammers/poca-SoccerTwos | MattStammers | 2023-09-26T18:47:30Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-09-09T15:03:00Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- unity-ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MattStammers/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
### Video
This video shows the Unity baseline agents (blue) against my agents (purple). The Unity baseline agents perform only marginally better. |
prateeky2806/bert-base-uncased-sst2-epochs-2-lr-0.0001 | prateeky2806 | 2023-09-26T18:31:17Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-26T18:20:00Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-sst2-epochs-2-lr-0.0001
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.99
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-epochs-2-lr-0.0001
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0665
- Accuracy: 0.99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1932 | 1.0 | 2102 | 0.0753 | 0.99 |
| 0.1085 | 2.0 | 4204 | 0.0665 | 0.99 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
prateeky2806/bert-base-uncased-sst2-ia3-epochs-2-lr-0.005 | prateeky2806 | 2023-09-26T18:23:15Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-26T18:15:08Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-sst2-ia3-epochs-2-lr-0.005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst2-ia3-epochs-2-lr-0.005
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
- Accuracy: 0.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2126 | 1.0 | 2102 | 0.2255 | 0.93 |
| 0.1757 | 2.0 | 4204 | 0.2209 | 0.95 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
johnathan32992/ChineseAmbatukamRVCv2 | johnathan32992 | 2023-09-26T18:19:25Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-07-16T19:10:08Z | ---
license: openrail
---
# Nissan Man / Chinese Dreamybull / Chinese Ambatukam
## 1 minute 5 seconds from Reddit (I ain't linking it here)
#### [Bunda Rahma](https://huggingface.co/johnathan32992/BundaRahmaRVCv2)
#### [Kakangu](https://huggingface.co/johnathan32992/KakanguRVCv2) |
mehranmehr/ppo-Huggy | mehranmehr | 2023-09-26T18:09:11Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-26T18:09:05Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mehranmehr/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
|
IAteSpaghettiForLunch/DialoGPT-medium-GLADoS | IAteSpaghettiForLunch | 2023-09-26T17:58:56Z | 138 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T16:38:46Z | ---
license: cc-by-nc-nd-4.0
pipeline_tag: conversational
language:
- en
--- |
CyberHarem/kagamihara_nadeshiko_yurucamp | CyberHarem | 2023-09-26T17:43:47Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/kagamihara_nadeshiko_yurucamp",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-26T05:56:06Z | ---
license: mit
datasets:
- CyberHarem/kagamihara_nadeshiko_yurucamp
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kagamihara_nadeshiko_yurucamp
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the chosen step, use them together: the pt file serves as an embedding, while the safetensors file is loaded as a LoRA.
For example, to use the model from step 4960, download `4960/kagamihara_nadeshiko_yurucamp.pt` as the embedding and `4960/kagamihara_nadeshiko_yurucamp.safetensors` as the LoRA. Using both files together, you can generate images of the desired character.
**The best step we recommend is 4960**, with a score of 0.979. The trigger words are:
1. `kagamihara_nadeshiko_yurucamp`
2. `pink_hair, long_hair, hair_between_eyes, blue_eyes, closed_mouth, smile`
We do not recommend this model for the following groups, and we express our regret:
1. Those who cannot tolerate any deviation from the original character design, however slight.
2. Those whose use cases demand high accuracy in recreating character outfits.
3. Those who cannot accept the inherent randomness of images generated with Stable Diffusion.
4. Those who are uncomfortable with a fully automated LoRA training process, or who believe character models must be trained purely by hand to avoid disrespecting the characters.
5. Those who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9300 | 0.977 | [Download](9300/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9300/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](9300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9300/previews/nude.png) | [<NSFW, click to see>](9300/previews/nude2.png) |  |  |
| 8680 | 0.974 | [Download](8680/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8680/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](8680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8680/previews/nude.png) | [<NSFW, click to see>](8680/previews/nude2.png) |  |  |
| 8060 | 0.976 | [Download](8060/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8060/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](8060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8060/previews/nude.png) | [<NSFW, click to see>](8060/previews/nude2.png) |  |  |
| 7440 | 0.977 | [Download](7440/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7440/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](7440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7440/previews/nude.png) | [<NSFW, click to see>](7440/previews/nude2.png) |  |  |
| 6820 | 0.967 | [Download](6820/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6820/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](6820/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6820/previews/nude.png) | [<NSFW, click to see>](6820/previews/nude2.png) |  |  |
| 6200 | 0.973 | [Download](6200/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6200/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](6200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6200/previews/nude.png) | [<NSFW, click to see>](6200/previews/nude2.png) |  |  |
| 5580 | 0.934 | [Download](5580/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5580/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](5580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5580/previews/nude.png) | [<NSFW, click to see>](5580/previews/nude2.png) |  |  |
| **4960** | **0.979** | [**Download**](4960/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4960/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](4960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4960/previews/nude.png) | [<NSFW, click to see>](4960/previews/nude2.png) |  |  |
| 4340 | 0.972 | [Download](4340/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4340/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](4340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4340/previews/nude.png) | [<NSFW, click to see>](4340/previews/nude2.png) |  |  |
| 3720 | 0.935 | [Download](3720/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3720/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](3720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3720/previews/nude.png) | [<NSFW, click to see>](3720/previews/nude2.png) |  |  |
| 3100 | 0.969 | [Download](3100/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3100/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](3100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3100/previews/nude.png) | [<NSFW, click to see>](3100/previews/nude2.png) |  |  |
| 2480 | 0.901 | [Download](2480/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2480/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](2480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2480/previews/nude.png) | [<NSFW, click to see>](2480/previews/nude2.png) |  |  |
| 1860 | 0.961 | [Download](1860/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1860/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](1860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1860/previews/nude.png) | [<NSFW, click to see>](1860/previews/nude2.png) |  |  |
| 1240 | 0.962 | [Download](1240/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1240/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](1240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1240/previews/nude.png) | [<NSFW, click to see>](1240/previews/nude2.png) |  |  |
| 620 | 0.877 | [Download](620/kagamihara_nadeshiko_yurucamp.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](620/previews/pattern_14.png) |  |  |  |  |  | [<NSFW, click to see>](620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](620/previews/nude.png) | [<NSFW, click to see>](620/previews/nude2.png) |  |  |
|
zineddine/taxi-v3 | zineddine | 2023-09-26T17:43:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T17:43:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL course notebook; `gym` must be imported
model = load_from_hub(repo_id="zineddine/taxi-v3", filename="q-learning.pkl")

# Check whether the environment needs extra attributes (e.g. is_slippery=False)
env = gym.make(model["env_id"])
```
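Once the Q-table is loaded, action selection is a simple argmax over the row for the current state. A minimal pure-Python sketch (the toy table below is illustrative, not the trained one):

```python
def greedy_action(qtable, state):
    """Exploit the learned values: pick the index of the highest-Q action for this state."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)

# toy 2-state, 3-action table for illustration
toy_q = [[0.1, 0.9, 0.0],
         [0.5, 0.2, 0.7]]
best = greedy_action(toy_q, 0)  # action 1 has the highest value in state 0
```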
|
testing244/t5_recommendation_sports_equipment_english | testing244 | 2023-09-26T17:43:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-26T17:33:59Z | ---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment_english
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4020
- Rouge1: 57.9365
- Rouge2: 47.6190
- Rougel: 57.9365
- Rougelsum: 57.9365
- Gen Len: 4.1429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
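The `total_train_batch_size` of 16 above is the per-device batch size (4) times the gradient accumulation steps (4). The accumulation pattern itself is framework-generic; a minimal plain-Python sketch of the idea (illustrative only — the actual run used the 🤗 Trainer, and gradients are scalars here for simplicity):

```python
# Gradient accumulation: sum gradients over N micro-batches, then
# apply a single averaged optimizer step.
ACCUM_STEPS = 4

def train_steps(micro_batch_grads):
    """Yield the averaged gradient each time an optimizer step fires."""
    buffer, count = 0.0, 0
    for g in micro_batch_grads:
        buffer += g
        count += 1
        if count == ACCUM_STEPS:
            yield buffer / ACCUM_STEPS  # one "real" update per 4 micro-batches
            buffer, count = 0.0, 0

grads = [1.0, 2.0, 3.0, 4.0, 2.0, 2.0, 2.0, 2.0]
updates = list(train_steps(grads))
print(updates)  # [2.5, 2.0]
```

Eight micro-batches of size 4 thus yield two optimizer steps at an effective batch size of 16.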
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.96 | 6 | 7.7857 | 20.2721 | 10.3896 | 20.0454 | 20.9524 | 11.3810 |
| No log | 1.92 | 12 | 3.1922 | 20.0 | 4.7619 | 20.4762 | 20.4762 | 3.1905 |
| No log | 2.88 | 18 | 0.8028 | 5.5556 | 0.0 | 5.5556 | 5.5556 | 3.0 |
| No log | 4.0 | 25 | 0.7207 | 32.8571 | 19.0476 | 32.9365 | 34.0476 | 3.2381 |
| No log | 4.96 | 31 | 0.5217 | 50.3968 | 42.8571 | 50.0 | 50.7937 | 3.9524 |
| No log | 5.92 | 37 | 0.4420 | 57.9365 | 47.6190 | 57.9365 | 57.9365 | 4.0476 |
| No log | 6.88 | 43 | 0.4694 | 67.4603 | 61.9048 | 67.4603 | 67.4603 | 4.0 |
| No log | 8.0 | 50 | 0.4408 | 57.9365 | 47.6190 | 57.9365 | 57.9365 | 4.1429 |
| No log | 8.96 | 56 | 0.4269 | 57.9365 | 47.6190 | 57.9365 | 57.9365 | 4.1429 |
| No log | 9.6 | 60 | 0.4020 | 57.9365 | 47.6190 | 57.9365 | 57.9365 | 4.1429 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.8.0
- Tokenizers 0.13.3
|
zineddine/q-FrozenLake-v1-4x4-noSlippery | zineddine | 2023-09-26T17:39:37Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T17:39:35Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="zineddine/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
luisgasco/biomedical-roberta-finetuned-iomed_task | luisgasco | 2023-09-26T17:38:07Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/roberta-base-biomedical-es",
"base_model:finetune:PlanTL-GOB-ES/roberta-base-biomedical-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-26T15:38:06Z | ---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-biomedical-es
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biomedical-roberta-finetuned-iomed_task
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomedical-roberta-finetuned-iomed_task
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0582
- Precision: 0.2269
- Recall: 0.4283
- F1: 0.2966
- Accuracy: 0.7695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.2536 | 2.0 | 1520 | 1.2135 | 0.1082 | 0.2685 | 0.1542 | 0.7422 |
| 1.0249 | 4.0 | 3040 | 1.0510 | 0.1448 | 0.3244 | 0.2002 | 0.7650 |
| 0.9 | 6.0 | 4560 | 1.0098 | 0.1587 | 0.3512 | 0.2186 | 0.7694 |
| 0.8002 | 8.0 | 6080 | 1.0143 | 0.1835 | 0.3795 | 0.2474 | 0.7664 |
| 0.7195 | 10.0 | 7600 | 1.0173 | 0.2007 | 0.4055 | 0.2685 | 0.7691 |
| 0.693 | 12.0 | 9120 | 1.0218 | 0.1991 | 0.4079 | 0.2676 | 0.7683 |
| 0.6139 | 14.0 | 10640 | 1.0394 | 0.2063 | 0.4071 | 0.2738 | 0.7672 |
| 0.616 | 16.0 | 12160 | 1.0376 | 0.2141 | 0.4142 | 0.2823 | 0.7695 |
| 0.5911 | 18.0 | 13680 | 1.0491 | 0.2240 | 0.4268 | 0.2938 | 0.7697 |
| 0.6042 | 20.0 | 15200 | 1.0582 | 0.2269 | 0.4283 | 0.2966 | 0.7695 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
johnpaulbin/toxic-gte-small-1 | johnpaulbin | 2023-09-26T17:34:58Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-26T17:34:39Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# johnpaulbin/toxic-gte-small-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("johnpaulbin/toxic-gte-small-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
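Step 1 of the technique above works by turning a handful of labeled sentences into many contrastive training pairs: same-label sentences become positives, cross-label sentences negatives. A minimal sketch of that pair-generation idea (illustrative only, not the actual SetFit internals):

```python
from itertools import combinations

# Few-shot labeled examples: (sentence, class label)
examples = [("great movie", 1), ("loved it", 1), ("awful", 0), ("boring", 0)]

def contrastive_pairs(examples):
    """Label a sentence pair 1 if the two sentences share a class, else 0."""
    pairs = []
    for (s1, y1), (s2, y2) in combinations(examples, 2):
        pairs.append((s1, s2, int(y1 == y2)))
    return pairs

pairs = contrastive_pairs(examples)
print(len(pairs))  # C(4, 2) = 6 pairs from only 4 examples
```

This quadratic blow-up in training signal is what lets the Sentence Transformer be fine-tuned effectively from so few labeled examples.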
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
johnathan32992/TeresaTengRVCv1 | johnathan32992 | 2023-09-26T17:24:55Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-06-23T18:44:11Z | ---
license: openrail
---
# Teresa Teng / 鄧麗君
## 24 minutes 18 seconds of data from [เติ้งลี่จวิน รำลึก 25 ปี](https://www.youtube.com/watch?v=MlLIk71h7ik&t=1007s&ab_channel=monairuektavilchai) on YouTube.
Note: Because this model's dataset comes from YouTube, the audio is compressed and was **not** de-echoed, so the model's pronunciation is very poor. |
hdeldar/llama-2-7b-persian-text-1k-1 | hdeldar | 2023-09-26T17:02:41Z | 19 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"pythorch",
"en",
"fa",
"dataset:hdeldar/Persian-Text-llama2-1k-1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T16:19:46Z | ---
license: apache-2.0
datasets:
- hdeldar/Persian-Text-llama2-1k-1
pipeline_tag: text-generation
language:
- en
- fa
tags:
- llama
- llama2
- pythorch
---
# 🦙🧠 Persian-Text-llama2-7b-1k-1
📝 [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
💻 [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing) |
📄 [Script](https://gist.github.com/mlabonne/b5718e1b229ce6553564e3f56df72c5c)
<center><img src="https://i.imgur.com/1IZmjU4.png" width="300"></center>
This is a [`llama-2-7b-persian-text-1k`](https://huggingface.co/hdeldar/llama-2-7b-persian-text-1k) model fine-tuned using QLoRA (4-bit precision) on the [`hdeldar/Persian-Text-llama2-1k-1`](https://huggingface.co/datasets/hdeldar/Persian-Text-llama2-1k-1) dataset, which is a subset of the [`SeyedAli/Persian-Text-QA`](https://huggingface.co/datasets/SeyedAli/Persian-Text-QA) dataset.
## 🔧 Training
It was trained on a Google Colab notebook with a T4 GPU and high RAM. It is mainly designed for educational purposes, not for inference.
## 💻 Usage
``` python
# pip install transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "hdeldar/llama-2-7b-persian-text-1k-1"
prompt = "What is a large language model?"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
f'<s>[INST] {prompt} [/INST]',
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
Output:
> A large language model is trained on massive amounts of text data to understand and generate human language. The model learns by predicting the next word in a sequence based on the context of the previous words. This process allows the language model to learn patterns, rules, and relationships within the language that allow it to generate text that looks and sounds authentic and coherent. These large language models are used for many applications, such as language translation, sentiment analysis, and language generation. These models can also be used to generate text summaries of complex documents, such as legal or scientific papers, or to generate text summaries of social media posts. These models are often used in natural language processing (NLP) and machine learning applications.
> The large language models are trained using a large number of parameters, often in the billions or even in the tens of billions.
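The generated answer above describes next-word prediction from context. As a toy illustration of that idea (entirely separate from the actual Llama weights — a bigram counter standing in for a learned model):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count word pairs in a tiny corpus,
# then predict the most frequent follower of a given word.
corpus = "the model learns the patterns and the model predicts".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Most common word observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))    # "model" (seen twice vs. "patterns" once)
print(predict_next("model"))  # "learns" or "predicts" (tie -> first seen)
```

A real LLM replaces these raw counts with a neural network conditioned on the full preceding context, but the training objective — predict the next token — is the same.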
|
auhide/chef-gpt | auhide | 2023-09-26T16:50:18Z | 158 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"bg",
"base_model:auhide/chef-gpt-base",
"base_model:finetune:auhide/chef-gpt-base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-04-15T11:28:09Z | ---
language:
- bg
license: mit
inference: false
pipeline_tag: text-generation
base_model: auhide/chef-gpt-base
model-index:
- name: chef-gpt
results: []
---
# chef-gpt
This model is a fine-tuned version of [auhide/chef-gpt-base](https://huggingface.co/auhide/chef-gpt-base). Visit this [website](https://chef-gpt.streamlit.app/) to test it out.
## Model Description
This is GPT-2 pretrained on a custom Bulgarian dataset.
You can find the dataset [here](https://www.kaggle.com/datasets/auhide/bulgarian-recipes-dataset).
The difference between this one and the base version is that this one can also generate recipes from a recipe name.
## Usage
```python
import re
# Using this library to beautifully print the long recipe string.
from pprint import pprint
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer:
MODEL_ID = "auhide/chef-gpt"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
chef_gpt = AutoModelForCausalLM.from_pretrained(MODEL_ID)
# Prepare the input:
title = "Пиле с ориз"
input_text = f"[TTL]{title}[ING]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# Generate the text:
output = chef_gpt.generate(input_ids, max_length=150)
recipe = tokenizer.batch_decode(output)[0]
# Get the generated recipe - it is up until the 1st [SEP] token. It includes the ingredients.
recipe = re.findall(r"\[ING\](.+?)\[SEP\]", recipe)[0]
# Format the output text:
recipe = recipe.replace("[ING]", "- ")
recipe = recipe.replace("[EOL]", "\n- ")
recipe = recipe.replace("[REC]", "\n\n")
print("Име на рецепта/Recipe name:")
print(title)
print("\nРецепта/Recipe:")
pprint(recipe)
```
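The extraction-and-replace steps at the end of the snippet above can be factored into one helper — a sketch that assumes the same special tokens (`[ING]`, `[EOL]`, `[REC]`, `[SEP]`) used by this model, demonstrated on a short synthetic string rather than real model output:

```python
import re

def format_recipe(raw: str) -> str:
    """Extract everything between the first [ING] and the first [SEP],
    then map the model's layout tokens to plain text, mirroring the
    post-processing in the usage snippet."""
    body = re.findall(r"\[ING\](.+?)\[SEP\]", raw)[0]
    body = body.replace("[ING]", "- ")    # remaining ingredient markers
    body = body.replace("[EOL]", "\n- ")  # line break between ingredients
    body = body.replace("[REC]", "\n\n")  # blank line before instructions
    return body

raw = "[TTL]Title[ING]rice[EOL]oil[REC]Boil the rice.[SEP]extra"
print(format_recipe(raw))
```

Everything after the first `[SEP]` is discarded, so trailing generation artifacts never reach the formatted recipe.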
```bash
Име на рецепта/Recipe name:
Пиле с ориз
Рецепта/Recipe:
('- 2 бр. пилешки бутчета\n'
'- 1 кг зеле\n'
'- 1 ч.ч. ориз\n'
'- 1 ч.ч. доматено пюре\n'
'- 1 глава лук\n'
'- олио\n'
'- червен пипер, черен пипер, сол, джоджен, чубрица\n'
'- целина\n'
'\n'
'Бутчетата се сваряват, обезкостяват и месото се накъсва. Лукът се нарязва на '
'полумесеци е се задушава в олио. Прибавя се нарязаното на ивици зеле. Когато '
'зелето омекне се слага оризът, а като стане прозрачен се добавят '
'подправките. Разбърква се добре, полива се с доматеното пюре и 3 ч.ч. от '
'бульона, в който е вряло месото. Оставя се да ври на тих огън около 20-30 '
'минути. Ястието се прехвърля в тава и се пече на 250С докато изври водата.')
``` |
erkam/sg2im-128-bs-32-depth-cc | erkam | 2023-09-26T16:38:48Z | 3 | 0 | diffusers | [
"diffusers",
"sg-to-image",
"scene-graph",
"stable-diffusion",
"stable-diffusion-diffusers",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-20T16:07:08Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- sg-to-image
- scene-graph
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - erkam/sg2im-128-bs-32-depth-cc
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-full-v5 dataset. You can find some example images below.
|
Undi95/SynthiAthena-v2 | Undi95 | 2023-09-26T16:35:25Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T15:56:30Z | ---
license: cc-by-nc-4.0
---
A 50/50 merge of [migtissera/Synthia-13B](https://huggingface.co/migtissera/Synthia-13B) and [IkariDev/Athena-v2](https://huggingface.co/IkariDev/Athena-v2).
Made for DarkReaperBoy. |
Nazzyk/a2c-PandaReachDense-v2 | Nazzyk | 2023-09-26T16:34:00Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-26T00:25:40Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.08 +/- 0.49
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
seaweed4/MNIST | seaweed4 | 2023-09-26T16:26:49Z | 0 | 0 | null | [
"image-classification",
"en",
"dataset:mnist",
"region:us"
]
| image-classification | 2023-09-26T16:01:22Z | ---
datasets:
- mnist
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
--- |
tangjs/uv-sdxl-r32-lr-4e7 | tangjs | 2023-09-26T16:24:23Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-26T08:32:03Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - tangjs/uv-sdxl-r32-lr-4e7
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the None dataset. You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
LarryAIDraw/missionarymotion | LarryAIDraw | 2023-09-26T16:18:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-26T16:12:21Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/123612/missionary-pov-motion-module-for-animatediff-proof-of-concept |
LarryAIDraw/yoinkoorlabsNSFWMotion_godmodeReal | LarryAIDraw | 2023-09-26T16:13:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-26T00:33:11Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/144934/yoinkoorlabs-nsfw-motion-module-v2 |
mindchain/META-LLAMA-LLAMA-2-7B-HF-GGUF | mindchain | 2023-09-26T16:11:49Z | 0 | 2 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-09-21T13:54:07Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Llama 2
<!-- Provide a quick summary of what the model is/does. -->
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lamurias/a2c-PandaReachDense-v3 | Lamurias | 2023-09-26T16:10:28Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T15:33:19Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.88 +/- 1.25
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
danieljova1/football | danieljova1 | 2023-09-26T16:07:30Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T16:07:24Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
GHonem/blip-image-captioning-base-test_sagemaker-tops-3 | GHonem | 2023-09-26T16:06:49Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"blip",
"image-text-to-text",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-09-26T15:40:15Z | ---
license: bsd-3-clause
tags:
- generated_from_trainer
model-index:
- name: blip-image-captioning-base-test_sagemaker-tops-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blip-image-captioning-base-test_sagemaker-tops-3
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- distributed_type: sagemaker_model_parallel
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
roa7n/gpt2-human_nontata_promoters-randomized_0_layers_3e-05_lr_2_e | roa7n | 2023-09-26T16:05:25Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T16:05:22Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
OpenNLG/OpenBA-V1-Based | OpenNLG | 2023-09-26T16:04:48Z | 29 | 9 | transformers | [
"transformers",
"pytorch",
"openba",
"feature-extraction",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2309.10706",
"license:apache-2.0",
"region:us"
]
| text-generation | 2023-09-20T05:56:52Z | ---
license: apache-2.0
language:
- zh
- en
tags:
- openba
pipeline_tag: text-generation
---
# Introduction
OpenBA is an Open-Sourced 15B Bilingual Asymmetric Seq2Seq Model Pre-trained from Scratch.
## Open Source Plan
We are excited to unveil two distinguished versions of our model, with another on the horizon:
- [OpenBA-LM](https://huggingface.co/OpenBA/OpenBA-LM): The backbone language model, pre-trained on 340B English, Chinese, and code tokens.
- [OpenBA-Flan](https://huggingface.co/OpenBA/OpenBA-Flan): We perform supervised fine-tuning on the base model with an additional 40B tokens, using our collected BiFlan Dataset.
- OpenBA-Chat: coming soon
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** zh, en (We also offer the possibility for multilingual learning, by using a multilingual tokenizer.)
- **License:** Apache 2.0
- **Resources for more information:**
- [Paper](https://arxiv.org/abs/2309.10706)
- [GitHub Repo](https://github.com/OpenNLG/OpenBA/)
# Usage
## Install requirements
```bash
pip install transformers torch>=2.0 sentencepiece
```
## Demo usage
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("OpenBA/OpenBA-LM", trust_remote_code=True).half().cuda()
>>> model = model.eval()
>>> query = "<S>" + "苏州处太湖平原,沿江为高沙平原,河" + "<extra_id_0>"
>>> inputs = tokenizer(query, return_tensors="pt").to("cuda")
>>> outputs = model.generate(**inputs, do_sample=True, max_new_tokens=32)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
流两侧为河淤平原,苏州平原是江苏平原主体,地势低平,土地肥沃,气候温和
``` |
eugene6/Reinforce-CartPole-v1 | eugene6 | 2023-09-26T16:04:23Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T16:04:12Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
anaReviewsWorks/DtxBlackFunciona | anaReviewsWorks | 2023-09-26T15:58:46Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-26T15:47:02Z | Deseja saber se o DTX Black realmente funciona? Este artigo fornecerá informações sobre a eficácia do produto, sua composição, como usá-lo e onde comprá-lo no site oficial.
Como Emagrecer Rapidamente com o DTX Black
Emagrecer não é uma tarefa simples, mas agora você tem um aliado nessa batalha:
o DTX Black. Este suplemento permite eliminar o excesso de peso de maneira descomplicada e eficaz.
Com o DTX Black, você experimentará uma maior sensação de saciedade, terá mais energia para suas atividades diárias e alcançará uma perda de peso natural e segura, sem comprometer sua saúde.
Continue lendo para descobrir como o DTX Black pode ajudá-lo a derreter o tecido adiposo de forma acelerada e conquistar o corpo que deseja.
DTX Black Funciona Mesmo?
Uma pesquisa realizada em 2016 avaliou mulheres que utilizaram o DTX Black por três meses ou mais, comparando-as com aquelas que não usaram nenhum suplemento.
O grupo que usou o DTX Black obteve resultados surpreendentes:
Perda de peso de 10 kg ou mais.
Redução significativa de celulite e inchaço (63% das participantes).
Redução de 40% na sensação de fome.
Melhora no funcionamento do intestino relatada por 90% das mulheres.
Esses resultados foram posteriormente confirmados por outras pesquisas, destacando a eficácia do DTX Black na perda de gordura.
O DTX Black realmente funciona! Ele pode ajudar você a perder mais de 1 kg de gordura por semana, mesmo se sua dieta não for ideal.
Isso acontece porque o suplemento atua diretamente nas células de gordura, liberando ácidos graxos para serem utilizados como energia.
[Clique Aqui Para Comprar DTX Black Com Desconto]
https://bit.ly/DtxBlack-com |
aidiffusionartist/exmachina420-realistic-v1-5 | aidiffusionartist | 2023-09-26T15:41:44Z | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
]
| null | 2023-09-26T14:33:49Z | ---
license: mit
language:
- en
--- |
airjairj/MODELLO | airjairj | 2023-09-26T15:37:00Z | 91 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-13T16:31:44Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: MODELLO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MODELLO
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1818
- Edit Distance: 13.598
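The card has no usage snippet; a minimal sketch with the `transformers` pipeline is below. The task name follows the repo's text2text-generation tag, and the input string is only an illustration — the card does not document the training task.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the task matches the repo's text2text-generation tag.
generator = pipeline("text2text-generation", model="airjairj/MODELLO")

# Illustrative input only -- the card does not say what task the model was tuned on.
result = generator("translate English to French: Hello, world!", max_new_tokens=32)
print(result[0]["generated_text"])
```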
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 18
- eval_batch_size: 18
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 18
### Training results
| Training Loss | Epoch | Step | Validation Loss | Edit Distance |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| 0.7351 | 1.0 | 500 | 0.2832 | 13.844 |
| 0.3224 | 2.0 | 1000 | 0.2401 | 13.85 |
| 0.2788 | 3.0 | 1500 | 0.2285 | 13.795 |
| 0.2595 | 4.0 | 2000 | 0.2179 | 13.805 |
| 0.2469 | 5.0 | 2500 | 0.2066 | 13.687 |
| 0.233 | 6.0 | 3000 | 0.1912 | 13.67 |
| 0.219 | 7.0 | 3500 | 0.1874 | 13.658 |
| 0.2135 | 8.0 | 4000 | 0.1895 | 13.65 |
| 0.2101 | 9.0 | 4500 | 0.1883 | 13.643 |
| 0.2074 | 10.0 | 5000 | 0.1836 | 13.643 |
| 0.2057 | 11.0 | 5500 | 0.1825 | 13.649 |
| 0.2042 | 12.0 | 6000 | 0.1834 | 13.614 |
| 0.2034 | 13.0 | 6500 | 0.1828 | 13.623 |
| 0.2017 | 14.0 | 7000 | 0.1820 | 13.653 |
| 0.2017 | 15.0 | 7500 | 0.1824 | 13.634 |
| 0.2004 | 16.0 | 8000 | 0.1822 | 13.641 |
| 0.2006 | 17.0 | 8500 | 0.1817 | 13.62 |
| 0.2005 | 18.0 | 9000 | 0.1818 | 13.598 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Tommert25/multibert2809_flow | Tommert25 | 2023-09-26T15:33:17Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-26T15:21:05Z | ---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: multibert2809_flow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multibert2809_flow
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4534
- Precision: 0.7055
- Recall: 0.7076
- F1: 0.7066
- Accuracy: 0.8709
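A minimal inference sketch, assuming the standard `transformers` token-classification pipeline; the example sentence is only an illustration, since the card does not document the label set.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Tommert25/multibert2809_flow",
    aggregation_strategy="simple",  # merge word-piece tokens into entity spans
)

# Illustrative input; the card does not describe the training labels.
entities = ner("Sandra werkt bij een ziekenhuis in Utrecht.")
print(entities)
```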
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 118 | 0.5021 | 0.6550 | 0.6229 | 0.6385 | 0.8414 |
| No log | 2.0 | 236 | 0.4534 | 0.7055 | 0.7076 | 0.7066 | 0.8709 |
| No log | 3.0 | 354 | 0.4903 | 0.7455 | 0.7237 | 0.7345 | 0.8752 |
| No log | 4.0 | 472 | 0.5158 | 0.7488 | 0.7327 | 0.7407 | 0.8755 |
| 0.3074 | 5.0 | 590 | 0.5685 | 0.7502 | 0.7434 | 0.7468 | 0.8758 |
| 0.3074 | 6.0 | 708 | 0.5799 | 0.7612 | 0.7530 | 0.7570 | 0.8809 |
| 0.3074 | 7.0 | 826 | 0.6022 | 0.7673 | 0.7494 | 0.7582 | 0.8791 |
| 0.3074 | 8.0 | 944 | 0.6054 | 0.7663 | 0.7554 | 0.7608 | 0.8840 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
roa7n/gpt2-human_nontata_promoters-randomized_0_layers_0.0003_lr_2_e | roa7n | 2023-09-26T15:32:57Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T15:32:55Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Takagi-san/SaProt_650M_AF2 | Takagi-san | 2023-09-26T15:24:16Z | 172 | 2 | transformers | [
"transformers",
"pytorch",
"esm",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-26T12:02:40Z | ---
license: mit
---
We provide both a Hugging Face version and an
[esm version](https://github.com/facebookresearch/esm) of
SaProt (see our GitHub repo: <https://github.com/SaProt/SaProt>). You can choose either one.
### Huggingface model
The following code shows how to load the model.
```python
from transformers import EsmTokenizer, EsmForMaskedLM
model_path = "/your/path/to/SaProt_650M_AF2"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)
#################### Example ####################
device = "cuda"
model.to(device)
seq = "MdEvVpQpLrVyQdYaKv"
tokens = tokenizer.tokenize(seq)
print(tokens)
inputs = tokenizer(seq, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model(**inputs)
print(outputs.logits.shape)
"""
['Md', 'Ev', 'Vp', 'Qp', 'Lr', 'Vy', 'Qd', 'Ya', 'Kv']
torch.Size([1, 11, 446])
"""
```
### esm model
The esm version is also stored in the same folder, named `SaProt_650M_AF2.pt`. We provide a function to load the model.
```python
from utils.esm_loader import load_esm_saprot
model_path = "/your/path/to/SaProt_650M_AF2.pt"
model, alphabet = load_esm_saprot(model_path)
``` |
traeval/tesla1500_llama2_7b-2-7b | traeval | 2023-09-26T15:20:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T15:10:58Z | ***** train metrics *****
epoch = 1.33
total_flos = 14124142GF
train_loss = 0.7836
train_runtime = 1:27:16.97
train_samples_per_second = 0.382
train_steps_per_second = 0.095
{'train_runtime': 5236.9755, 'train_samples_per_second': 0.382, 'train_steps_per_second': 0.095, 'total_flos': 1.5165682398461952e+16, 'train_loss': 0.7835705888271332, 'epoch': 1.33} |
md-nishat-008/Tri-Distil-BERT | md-nishat-008 | 2023-09-26T15:11:27Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"arxiv:2309.10272",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-17T00:12:17Z | ---
license: apache-2.0
---
The model is pretrained on the OSCAR dataset for Bangla, English, and Hindi.
The base model is DistilBERT, and this model is intended for datasets that contain a mix of these languages.
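The card gives no usage snippet; a minimal fill-mask sketch with the `transformers` pipeline (the example sentence is only an illustration):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="md-nishat-008/Tri-Distil-BERT")

# DistilBERT-style checkpoints use the [MASK] token; the sentence is illustrative.
preds = fill("The weather today is [MASK].")
for p in preds[:3]:
    print(p["token_str"], round(p["score"], 3))
```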
To cite:

```bibtex
@article{raihan2023mixed,
  title={Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and Hindi},
  author={Raihan, Md Nishat and Goswami, Dhiman and Mahmud, Antara},
  journal={arXiv preprint arXiv:2309.10272},
  year={2023}
}
```
VuongQuoc/checkpoints_1_microsoft_deberta_21_9 | VuongQuoc | 2023-09-26T15:11:04Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"base_model:VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9",
"base_model:finetune:VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9",
"license:mit",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2023-09-21T11:35:36Z | ---
license: mit
base_model: VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9
tags:
- generated_from_trainer
model-index:
- name: checkpoints_1_microsoft_deberta_21_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints_1_microsoft_deberta_21_9
This model is a fine-tuned version of [VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9](https://huggingface.co/VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
therealcyberlord/llama2-qlora-finetuned-medical | therealcyberlord | 2023-09-26T15:10:42Z | 12 | 5 | peft | [
"peft",
"llama",
"llm",
"llama2",
"medical",
"text-generation",
"dataset:BI55/MedText",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| text-generation | 2023-08-05T23:10:14Z | ---
library_name: peft
tags:
- llm
- llama2
- medical
datasets:
- BI55/MedText
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Llama2 🦙 finetuned on medical diagnosis
MedText dataset: https://huggingface.co/datasets/BI55/MedText
1412 pairs of diagnosis cases
# About:
The primary objective of this fine-tuning process is to equip Llama2 with the ability to assist in diagnosing various medical cases and diseases.
However, it is essential to clarify that it is not designed to replace real medical professionals. Instead, its purpose is to provide helpful information to users,
suggesting potential next steps based on the input data and the patterns it has learned from the MedText dataset.
Fine-tuned on Guanaco-style instructions:
```
###Human
###Assistant
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0 |
jpostma/s-DagoBERT-TSDAE | jpostma | 2023-09-26T15:04:03Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-03T14:39:21Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# s-DagoBERT-TSDAE
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jpostma/s-DagoBERT-TSDAE')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jpostma/s-DagoBERT-TSDAE')
model = AutoModel.from_pretrained('jpostma/s-DagoBERT-TSDAE')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jpostma/s-DagoBERT-TSDAE)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 360,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 80, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
roa7n/gpt2-human_nontata_promoters-randomized_0_layers_0.003_lr_2_e | roa7n | 2023-09-26T15:00:21Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T14:32:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
luisgasco/setfit-sentence-classifier_test_biomed_5it_b16 | luisgasco | 2023-09-26T14:57:20Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-26T14:56:51Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luisgasco/setfit-sentence-classifier_test_biomed_5it_b16
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luisgasco/setfit-sentence-classifier_test_biomed_5it_b16")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
LeeSolomonson/ppo-LunarLander-v2 | LeeSolomonson | 2023-09-26T14:56:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T14:56:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.29 +/- 73.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3 naming convention — check the repo's file list if loading fails):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the standard SB3 template.
checkpoint = load_from_hub(
    repo_id="LeeSolomonson/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
rezaparseh/phi-1_5-finetuned-gsm8k | rezaparseh | 2023-09-26T14:48:24Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-09-26T14:14:04Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nguyenlephucvinh2011/llama2-qlora-finetunined | nguyenlephucvinh2011 | 2023-09-26T14:45:57Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T14:45:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
mehranmehr/ppo-LunarLander-v2 | mehranmehr | 2023-09-26T14:41:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T14:41:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.34 +/- 19.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual SB3 naming convention — check the repo's file list if loading fails):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the standard SB3 template.
checkpoint = load_from_hub(
    repo_id="mehranmehr/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
MightyDuckk/lora-trained-xl-colab | MightyDuckk | 2023-09-26T14:35:03Z | 5 | 2 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-26T13:19:14Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MightyDuckk/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). Example images can be found below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
anders0204/q-FrozenLake-v1-4x4-noSlippery | anders0204 | 2023-09-26T14:31:58Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T14:31:56Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="anders0204/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MUmairAB/marian-finetuned-kde4-english-to-french | MUmairAB | 2023-09-26T14:30:30Z | 63 | 1 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-07-11T15:22:18Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: Helsinki-NLP/opus-mt-en-fr
model-index:
- name: marian-finetuned-kde4-english-to-french
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-english-to-french
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6794
- Validation Loss: 0.8119
- Epoch: 2
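The card has no usage snippet; a minimal sketch with the `transformers` translation pipeline. Note the checkpoint is stored as TensorFlow weights, so TensorFlow must be installed and the pipeline is pinned to the TF framework; the input sentence is only an illustration.

```python
from transformers import pipeline

# The repo ships TF weights, so pin the pipeline to the TensorFlow framework.
translator = pipeline(
    "translation_en_to_fr",
    model="MUmairAB/marian-finetuned-kde4-english-to-french",
    framework="tf",
)

result = translator("Default to expanded threads")
print(result[0]["translation_text"])
```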
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 29555, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0577 | 0.8929 | 0 |
| 0.8023 | 0.8343 | 1 |
| 0.6794 | 0.8119 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MUmairAB/bert-based-MaskedLM | MUmairAB | 2023-09-26T14:29:12Z | 70 | 1 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-08T14:03:10Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
datasets:
- imdb
pipeline_tag: fill-mask
base_model: distilbert-base-uncased
model-index:
- name: MUmairAB/bert-based-MaskedLM
results: []
---
# MUmairAB/bert-based-MaskedLM
**The model training code is available as a notebook on my [GitHub](https://github.com/MUmairAB/Masked-Language-Model-Fine-Tuning-with-HuggingFace-Transformers/tree/main)**
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on [IMDB Movies Review](https://huggingface.co/datasets/imdb) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4360
- Validation Loss: 2.3284
- Epoch: 20
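A minimal usage sketch with the fill-mask pipeline. The checkpoint is stored as TensorFlow weights, so TensorFlow must be installed; the movie-flavored sentence is only an illustration of the IMDB domain.

```python
from transformers import pipeline

# TF checkpoint, so pin the pipeline to the TensorFlow framework.
unmasker = pipeline("fill-mask", model="MUmairAB/bert-based-MaskedLM", framework="tf")

preds = unmasker("This movie was absolutely [MASK].")
for p in preds[:3]:
    print(p["token_str"], round(p["score"], 3))
```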
## Training and validation loss during training
<img src="https://huggingface.co/MUmairAB/bert-based-MaskedLM/resolve/main/Loss%20plot.png" style="height: 432px; width:567px;"/>
## Model description
[DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased)
```
Model: "tf_distil_bert_for_masked_lm"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMainLayer)     multiple            66362880
vocab_transform (Dense)                multiple            590592
vocab_layer_norm (LayerNormalization)  multiple            1536
vocab_projector (TFDistilBertLMHead)   multiple            23866170
=================================================================
Total params: 66,985,530
Trainable params: 66,985,530
Non-trainable params: 0
_________________________________________________________________
```
## Intended uses & limitations
The model was trained on IMDB movies review dataset. So, it inherits the language biases from the dataset.
## Training and evaluation data
The model was trained on [IMDB Movies Review](https://huggingface.co/datasets/imdb) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.0754 | 2.7548 | 0 |
| 2.7969 | 2.6209 | 1 |
| 2.7214 | 2.5588 | 2 |
| 2.6626 | 2.5554 | 3 |
| 2.6466 | 2.4881 | 4 |
| 2.6238 | 2.4775 | 5 |
| 2.5696 | 2.4280 | 6 |
| 2.5504 | 2.3924 | 7 |
| 2.5171 | 2.3725 | 8 |
| 2.5180 | 2.3142 | 9 |
| 2.4443 | 2.2974 | 10 |
| 2.4497 | 2.3317 | 11 |
| 2.4371 | 2.3317 | 12 |
| 2.4377 | 2.3237 | 13 |
| 2.4369 | 2.3338 | 14 |
| 2.4350 | 2.3021 | 15 |
| 2.4267 | 2.3264 | 16 |
| 2.4557 | 2.3280 | 17 |
| 2.4461 | 2.3165 | 18 |
| 2.4360 | 2.3284 | 19 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3 |
MUmairAB/bert-ner | MUmairAB | 2023-09-26T14:28:31Z | 7 | 3 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"named entity recognition",
"bert-base finetuned",
"umair akram",
"en",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-07-05T15:45:06Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- generated_from_keras_callback
- named entity recognition
- bert-base finetuned
- umair akram
datasets:
- conll2003
metrics:
- seqeval
pipeline_tag: token-classification
base_model: bert-base-cased
model-index:
- name: MUmairAB/bert-ner
results: []
---
# MUmairAB/bert-ner
The model training notebook is available on my [GitHub Repo](https://github.com/MUmairAB/BERT-based-NER-using-HuggingFace-Transformers/tree/main).
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [Conll2003](https://huggingface.co/datasets/conll2003) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0003
- Validation Loss: 0.0880
- Epoch: 19
## How to use this model
```python
#Install the transformers library
!pip install transformers
#Import the pipeline
from transformers import pipeline
#Import the model from HuggingFace
checkpoint = "MUmairAB/bert-ner"
model = pipeline(task="token-classification",
model=checkpoint)
#Use the model
raw_text = "My name is umair and i work at Swits AI in Antarctica."
model(raw_text)
```
## Model description
Model: "tf_bert_for_token_classification"
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (TFBertMainLayer) multiple 107719680
dropout_37 (Dropout) multiple 0
classifier (Dense) multiple 6921
=================================================================
Total params: 107,726,601
Trainable params: 107,726,601
Non-trainable params: 0
_________________________________________________________________
```
## Intended uses & limitations
This model can be used for named entity recognition tasks. It is trained on [Conll2003](https://huggingface.co/datasets/conll2003) dataset. The model can classify four types of named entities:
1. persons,
2. locations,
3. organizations, and
4. names of miscellaneous entities that do not belong to the previous three groups.
## Training and evaluation data
The model is evaluated on [seqeval](https://github.com/chakki-works/seqeval) metric and the result is as follows:
```
{'LOC': {'precision': 0.9655361050328227,
'recall': 0.9608056614044638,
'f1': 0.9631650750341064,
'number': 1837},
'MISC': {'precision': 0.8789144050104384,
'recall': 0.913232104121475,
'f1': 0.8957446808510638,
'number': 922},
'ORG': {'precision': 0.9075144508670521,
'recall': 0.9366144668158091,
'f1': 0.9218348623853211,
'number': 1341},
'PER': {'precision': 0.962011771000535,
'recall': 0.9761129207383279,
'f1': 0.9690110482349771,
'number': 1842},
'overall_precision': 0.9374068554396423,
'overall_recall': 0.9527095254123191,
'overall_f1': 0.944996244053084,
'overall_accuracy': 0.9864013657502796}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17560, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
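With `power=1.0` and `end_learning_rate=0.0`, the `PolynomialDecay` schedule above reduces to a linear ramp from 2e-05 down to zero over 17,560 steps. A small sketch of that schedule (this mirrors the config by hand rather than calling Keras):

```python
# Linear decay implied by PolynomialDecay(power=1.0, end_learning_rate=0.0):
#   lr(step) = initial_lr * (1 - step / decay_steps)
initial_lr = 2e-05
decay_steps = 17560

def lr_at(step: int) -> float:
    step = min(step, decay_steps)  # with cycle=False the schedule clamps past decay_steps
    return initial_lr * (1 - step / decay_steps)

print(lr_at(0))       # 2e-05 at the start of training
print(lr_at(8780))    # 1e-05 halfway through
print(lr_at(17560))   # 0.0 at the final decay step
```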
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1775 | 0.0635 | 0 |
| 0.0470 | 0.0559 | 1 |
| 0.0278 | 0.0603 | 2 |
| 0.0174 | 0.0603 | 3 |
| 0.0124 | 0.0615 | 4 |
| 0.0077 | 0.0722 | 5 |
| 0.0060 | 0.0731 | 6 |
| 0.0038 | 0.0757 | 7 |
| 0.0043 | 0.0731 | 8 |
| 0.0041 | 0.0735 | 9 |
| 0.0019 | 0.0724 | 10 |
| 0.0019 | 0.0786 | 11 |
| 0.0010 | 0.0843 | 12 |
| 0.0008 | 0.0814 | 13 |
| 0.0011 | 0.0867 | 14 |
| 0.0008 | 0.0883 | 15 |
| 0.0005 | 0.0861 | 16 |
| 0.0005 | 0.0869 | 17 |
| 0.0003 | 0.0880 | 18 |
| 0.0003 | 0.0880 | 19 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3 |
dyaminda/pneumonia-classification | dyaminda | 2023-09-26T14:28:22Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-24T03:27:46Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pneumonia-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pneumonia-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0288
- Accuracy: 0.9923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
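The `total_train_batch_size` of 64 is simply the per-device batch size multiplied by the gradient-accumulation steps; gradients from 4 forward/backward passes are accumulated before each optimizer step. A quick check of the effective batch size:

```python
train_batch_size = 16           # per-device batch size
gradient_accumulation_steps = 4

# Gradients are accumulated over 4 micro-batches before each optimizer
# update, so the effective (total) train batch size is:
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64
```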
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1574 | 0.99 | 52 | 0.0976 | 0.9726 |
| 0.0643 | 2.0 | 105 | 0.0535 | 0.9845 |
| 0.0189 | 2.99 | 157 | 0.0490 | 0.9821 |
| 0.0208 | 4.0 | 210 | 0.0484 | 0.9881 |
| 0.0096 | 4.95 | 260 | 0.0463 | 0.9881 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
BBBBirdIsTheWord/rl_course_vizdoom_health_gathering_supreme | BBBBirdIsTheWord | 2023-09-26T14:25:37Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T14:25:19Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.20 +/- 4.60
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r BBBBirdIsTheWord/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
rayhanozzy/image_classification | rayhanozzy | 2023-09-26T14:14:04Z | 28 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-17T14:13:52Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3383
- Accuracy: 0.5625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
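The training table below logs 80 optimizer steps per epoch; at a batch size of 8 that implies roughly 640 training images (inferred, not stated in the card), and 20 epochs give the 1,600 total steps seen in the final row. A quick consistency check:

```python
steps_per_epoch = 80   # from the "Step" column: 80 steps at epoch 1.0
batch_size = 8
num_epochs = 20

approx_train_images = steps_per_epoch * batch_size  # ~640 (inferred dataset size)
total_steps = steps_per_epoch * num_epochs

print(approx_train_images)  # 640
print(total_steps)          # 1600, matching the last row of the results table
```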
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 1.6519 | 0.3312 |
| No log | 2.0 | 160 | 1.4509 | 0.4125 |
| No log | 3.0 | 240 | 1.3641 | 0.5062 |
| No log | 4.0 | 320 | 1.2676 | 0.5875 |
| No log | 5.0 | 400 | 1.2718 | 0.5188 |
| No log | 6.0 | 480 | 1.2250 | 0.5125 |
| 1.2828 | 7.0 | 560 | 1.1933 | 0.55 |
| 1.2828 | 8.0 | 640 | 1.1538 | 0.575 |
| 1.2828 | 9.0 | 720 | 1.2479 | 0.55 |
| 1.2828 | 10.0 | 800 | 1.2487 | 0.575 |
| 1.2828 | 11.0 | 880 | 1.2418 | 0.5938 |
| 1.2828 | 12.0 | 960 | 1.1514 | 0.6062 |
| 0.5147 | 13.0 | 1040 | 1.2563 | 0.5563 |
| 0.5147 | 14.0 | 1120 | 1.2933 | 0.5813 |
| 0.5147 | 15.0 | 1200 | 1.2857 | 0.5813 |
| 0.5147 | 16.0 | 1280 | 1.3044 | 0.575 |
| 0.5147 | 17.0 | 1360 | 1.4134 | 0.5687 |
| 0.5147 | 18.0 | 1440 | 1.3277 | 0.5875 |
| 0.2675 | 19.0 | 1520 | 1.2963 | 0.575 |
| 0.2675 | 20.0 | 1600 | 1.2049 | 0.6125 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
amitraheja82/MarketMailAIFineTuningModel | amitraheja82 | 2023-09-26T14:06:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T14:06:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
BBBBirdIsTheWord/LunarLander-v2_u8 | BBBBirdIsTheWord | 2023-09-26T14:01:53Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T13:37:01Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -176.90 +/- 0.00
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'BBBBirdIsTheWord/LunarLander-v2_u8'
'batch_size': 512
'minibatch_size': 128}
```
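The `batch_size` and `minibatch_size` entries are not free parameters: in the standard CleanRL PPO setup they are derived from the rollout settings. A short sketch of that bookkeeping:

```python
num_envs = 4
num_steps = 128        # rollout length per environment
num_minibatches = 4

# Each rollout collects num_envs * num_steps transitions, which form one
# update batch; that batch is then split into equal minibatches per epoch.
batch_size = num_envs * num_steps               # 512
minibatch_size = batch_size // num_minibatches  # 128
print(batch_size, minibatch_size)
```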
|
CyberHarem/okusawa_misaki_bangdream | CyberHarem | 2023-09-26T13:58:28Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/okusawa_misaki_bangdream",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-26T13:40:28Z | ---
license: mit
datasets:
- CyberHarem/okusawa_misaki_bangdream
pipeline_tag: text-to-image
tags:
- art
---
# Lora of okusawa_misaki_bangdream
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5760, you need to download `5760/okusawa_misaki_bangdream.pt` as the embedding and `5760/okusawa_misaki_bangdream.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5760**, with the score of 0.991. The trigger words are:
1. `okusawa_misaki_bangdream`
2. `bangs, black_hair, hair_ornament, blue_eyes, hairclip, blush, smile, medium_hair, open_mouth, long_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.981 | [Download](7200/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](7200/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6720 | 0.964 | [Download](6720/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](6720/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.973 | [Download](6240/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](6240/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| **5760** | **0.991** | [**Download**](5760/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](5760/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.967 | [Download](5280/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](5280/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.946 | [Download](4800/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](4800/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.987 | [Download](4320/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](4320/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.984 | [Download](3840/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](3840/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.956 | [Download](3360/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](3360/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.978 | [Download](2880/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](2880/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.979 | [Download](2400/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](2400/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.958 | [Download](1920/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](1920/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.969 | [Download](1440/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](1440/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.941 | [Download](960/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](960/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.715 | [Download](480/okusawa_misaki_bangdream.zip) |  |  | [<NSFW, click to see>](480/previews/pattern_3.png) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub | CyberHarem | 2023-09-26T13:50:55Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-26T13:39:45Z | ---
license: mit
datasets:
- CyberHarem/zhong_lanzhu_lovelivenijigasakihighschoolidolclub
pipeline_tag: text-to-image
tags:
- art
---
# Lora of zhong_lanzhu_lovelivenijigasakihighschoolidolclub
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8100, you need to download `8100/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.pt` as the embedding and `8100/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8100**, with the score of 0.994. The trigger words are:
1. `zhong_lanzhu_lovelivenijigasakihighschoolidolclub`
2. `long_hair, pink_hair, blue_eyes, ahoge, bangs, mole, mole_under_eye, smile, breasts, sidelocks, blush, hair_bun, double_bun`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **8100** | **0.994** | [**Download**](8100/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](8100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8100/previews/nude.png) | [<NSFW, click to see>](8100/previews/nude2.png) |  |  |
| 7560 | 0.983 | [Download](7560/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](7560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7560/previews/nude.png) | [<NSFW, click to see>](7560/previews/nude2.png) |  |  |
| 7020 | 0.981 | [Download](7020/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](7020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7020/previews/nude.png) | [<NSFW, click to see>](7020/previews/nude2.png) |  |  |
| 6480 | 0.977 | [Download](6480/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](6480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6480/previews/nude.png) | [<NSFW, click to see>](6480/previews/nude2.png) |  |  |
| 5940 | 0.970 | [Download](5940/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5400 | 0.962 | [Download](5400/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4860 | 0.965 | [Download](4860/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4860/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4860/previews/nude.png) | [<NSFW, click to see>](4860/previews/nude2.png) |  |  |
| 4320 | 0.978 | [Download](4320/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3780 | 0.988 | [Download](3780/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3780/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) |  |  |
| 3240 | 0.991 | [Download](3240/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](3240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3240/previews/nude.png) | [<NSFW, click to see>](3240/previews/nude2.png) |  |  |
| 2700 | 0.990 | [Download](2700/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2700/previews/nude.png) | [<NSFW, click to see>](2700/previews/nude2.png) |  |  |
| 2160 | 0.981 | [Download](2160/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](2160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2160/previews/nude.png) | [<NSFW, click to see>](2160/previews/nude2.png) |  |  |
| 1620 | 0.955 | [Download](1620/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1620/previews/nude.png) | [<NSFW, click to see>](1620/previews/nude2.png) |  |  |
| 1080 | 0.940 | [Download](1080/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](1080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1080/previews/nude.png) | [<NSFW, click to see>](1080/previews/nude2.png) |  |  |
| 540 | 0.830 | [Download](540/zhong_lanzhu_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  | [<NSFW, click to see>](540/previews/bondage.png) |  |  |  | [<NSFW, click to see>](540/previews/nude.png) | [<NSFW, click to see>](540/previews/nude2.png) |  |  |
|
takumi12/id2pg_pattern2_triple_epoch40 | takumi12 | 2023-09-26T13:45:39Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T13:45:32Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
HoangCuongNguyen/Flan-T5-finetuned-cti2 | HoangCuongNguyen | 2023-09-26T13:38:29Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-22T00:45:11Z | ---
language:
- en
pipeline_tag: text2text-generation
license: mit
--- |
tomaarsen/span-marker-bert-base-fewnerd-fine-super | tomaarsen | 2023-09-26T13:33:51Z | 545 | 12 | span-marker | [
"span-marker",
"pytorch",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"dataset:DFKI-SLT/few-nerd",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
]
| token-classification | 2023-03-31T07:28:50Z | ---
language:
- en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- DFKI-SLT/few-nerd
metrics:
- f1
- recall
- precision
pipeline_tag: token-classification
widget:
- text: Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic
to Paris.
example_title: Amelia Earhart
- text: Leonardo di ser Piero da Vinci painted the Mona Lisa based on Italian noblewoman
Lisa del Giocondo.
example_title: Leonardo da Vinci
base_model: bert-base-cased
model-index:
- name: SpanMarker w. bert-base-cased on finegrained, supervised FewNERD by Tom Aarsen
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: finegrained, supervised FewNERD
type: DFKI-SLT/few-nerd
config: supervised
split: test
revision: 2e3e727c63604fbfa2ff4cc5055359c84fe5ef2c
metrics:
- type: f1
value: 0.7053
name: F1
- type: precision
value: 0.7101
name: Precision
- type: recall
value: 0.7005
name: Recall
---
# SpanMarker with bert-base-cased on FewNERD
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [bert-base-cased](https://huggingface.co/bert-base-cased) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [bert-base-cased](https://huggingface.co/bert-base-cased)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
| art-broadcastprogram | "Street Cents", "Corazones", "The Gale Storm Show : Oh , Susanna" |
| art-film | "Bosch", "L'Atlantide", "Shawshank Redemption" |
| art-music | "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Champion Lover", "Hollywood Studio Symphony" |
| art-other | "Aphrodite of Milos", "Venus de Milo", "The Today Show" |
| art-painting | "Production/Reproduction", "Touit", "Cofiwch Dryweryn" |
| art-writtenart | "Imelda de ' Lambertazzi", "Time", "The Seven Year Itch" |
| building-airport | "Luton Airport", "Newark Liberty International Airport", "Sheremetyevo International Airport" |
| building-hospital | "Hokkaido University Hospital", "Yeungnam University Hospital", "Memorial Sloan-Kettering Cancer Center" |
| building-hotel | "The Standard Hotel", "Radisson Blu Sea Plaza Hotel", "Flamingo Hotel" |
| building-library | "British Library", "Berlin State Library", "Bayerische Staatsbibliothek" |
| building-other | "Communiplex", "Alpha Recording Studios", "Henry Ford Museum" |
| building-restaurant | "Fatburger", "Carnegie Deli", "Trumbull" |
| building-sportsfacility | "Glenn Warner Soccer Facility", "Boston Garden", "Sports Center" |
| building-theater | "Pittsburgh Civic Light Opera", "Sanders Theatre", "National Paris Opera" |
| event-attack/battle/war/militaryconflict | "Easter Offensive", "Vietnam War", "Jurist" |
| event-disaster | "the 1912 North Mount Lyell Disaster", "1693 Sicily earthquake", "1990s North Korean famine" |
| event-election | "March 1898 elections", "1982 Mitcham and Morden by-election", "Elections to the European Parliament" |
| event-other | "Eastwood Scoring Stage", "Union for a Popular Movement", "Masaryk Democratic Movement" |
| event-protest | "French Revolution", "Russian Revolution", "Iranian Constitutional Revolution" |
| event-sportsevent | "National Champions", "World Cup", "Stanley Cup" |
| location-GPE | "Mediterranean Basin", "the Republic of Croatia", "Croatian" |
| location-bodiesofwater | "Atatürk Dam Lake", "Norfolk coast", "Arthur Kill" |
| location-island | "Laccadives", "Staten Island", "new Samsat district" |
| location-mountain | "Salamander Glacier", "Miteirya Ridge", "Ruweisat Ridge" |
| location-other | "Northern City Line", "Victoria line", "Cartuther" |
| location-park | "Gramercy Park", "Painted Desert Community Complex Historic District", "Shenandoah National Park" |
| location-road/railway/highway/transit | "Friern Barnet Road", "Newark-Elizabeth Rail Link", "NJT" |
| organization-company | "Dixy Chicken", "Texas Chicken", "Church 's Chicken" |
| organization-education | "MIT", "Belfast Royal Academy and the Ulster College of Physical Education", "Barnard College" |
| organization-government/governmentagency | "Congregazione dei Nobili", "Diet", "Supreme Court" |
| organization-media/newspaper | "TimeOut Melbourne", "Clash", "Al Jazeera" |
| organization-other | "Defence Sector C", "IAEA", "4th Army" |
| organization-politicalparty | "Shimpotō", "Al Wafa ' Islamic", "Kenseitō" |
| organization-religion | "Jewish", "Christian", "UPCUSA" |
| organization-showorganization | "Lizzy", "Bochumer Symphoniker", "Mr. Mister" |
| organization-sportsleague | "China League One", "First Division", "NHL" |
| organization-sportsteam | "Tottenham", "Arsenal", "Luc Alphand Aventures" |
| other-astronomything | "Zodiac", "Algol", "`` Caput Larvae ''" |
| other-award | "GCON", "Order of the Republic of Guinea and Nigeria", "Grand Commander of the Order of the Niger" |
| other-biologything | "N-terminal lipid", "BAR", "Amphiphysin" |
| other-chemicalthing | "uranium", "carbon dioxide", "sulfur" |
| other-currency | "$", "Travancore Rupee", "lac crore" |
| other-disease | "French Dysentery Epidemic of 1779", "hypothyroidism", "bladder cancer" |
| other-educationaldegree | "Master", "Bachelor", "BSc ( Hons ) in physics" |
| other-god | "El", "Fujin", "Raijin" |
| other-language | "Breton-speaking", "English", "Latin" |
| other-law | "Thirty Years ' Peace", "Leahy–Smith America Invents Act ( AIA", "United States Freedom Support Act" |
| other-livingthing | "insects", "monkeys", "patchouli" |
| other-medical | "Pediatrics", "amitriptyline", "pediatrician" |
| person-actor | "Ellaline Terriss", "Tchéky Karyo", "Edmund Payne" |
| person-artist/author | "George Axelrod", "Gaetano Donizett", "Hicks" |
| person-athlete | "Jaguar", "Neville", "Tozawa" |
| person-director | "Bob Swaim", "Richard Quine", "Frank Darabont" |
| person-other | "Richard Benson", "Holden", "Campbell" |
| person-politician | "William", "Rivière", "Emeric" |
| person-scholar | "Stedman", "Wurdack", "Stalmine" |
| person-soldier | "Helmuth Weidling", "Krukenberg", "Joachim Ziegler" |
| product-airplane | "Luton", "Spey-equipped FGR.2s", "EC135T2 CPDS" |
| product-car | "100EX", "Corvettes - GT1 C6R", "Phantom" |
| product-food | "red grape", "yakiniku", "V. labrusca" |
| product-game | "Airforce Delta", "Hardcore RPG", "Splinter Cell" |
| product-other | "Fairbottom Bobs", "X11", "PDP-1" |
| product-ship | "Congress", "Essex", "HMS `` Chinkara ''" |
| product-software | "AmiPDF", "Apdf", "Wikipedia" |
| product-train | "High Speed Trains", "55022", "Royal Scots Grey" |
| product-weapon | "AR-15 's", "ZU-23-2M Wróbel", "ZU-23-2MR Wróbel II" |
## Uses
### Direct Use
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-fewnerd-fine-super")
# Run inference
entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-fewnerd-fine-super")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-bert-base-fewnerd-fine-super-finetuned")
```
</details>
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 1 | 24.4945 | 267 |
| Entities per sentence | 0 | 2.5832 | 88 |
### Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
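With `lr_scheduler_warmup_ratio: 0.1` and a linear scheduler, the learning rate ramps from 0 up to 5e-05 over the first 10% of training steps, then decays linearly back to 0. A minimal sketch of that schedule (illustrative only, not the Transformers implementation itself):

```python
def linear_schedule_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp up proportionally during warmup
        return base_lr * step / max(1, warmup_steps)
    # Linear decay over the remaining steps
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

# Peaks at the end of warmup, reaches zero at the final step
peak = linear_schedule_lr(100, 1000)
final = linear_schedule_lr(1000, 1000)
```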
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SpanMarker: 1.3.1.dev
- Transformers: 4.29.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.14.3
- Tokenizers: 0.13.2 |
barberry-nut/wing_damselfly | barberry-nut | 2023-09-26T13:30:19Z | 0 | 0 | null | [
"en",
"license:ecl-2.0",
"region:us"
]
| null | 2023-09-26T13:06:06Z | ---
license: ecl-2.0
language:
- en
---
A detectron2 model for recognizing damselfly wings in standard and perching photos |
luisgasco/setfit-sentence-classifier_test | luisgasco | 2023-09-26T13:22:26Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-26T11:21:24Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luisgasco/setfit-sentence-classifier_test
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
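The contrastive step works by expanding a handful of labeled sentences into pairs: sentences sharing a label form positive pairs, sentences with different labels form negative ones. A minimal sketch of that pair generation (illustrative only — the actual SetFit implementation differs in details such as sampling):

```python
from itertools import combinations

def make_contrastive_pairs(sentences, labels):
    """Expand few-shot examples into (sent_a, sent_b, is_same_label) pairs."""
    pairs = []
    for (s1, l1), (s2, l2) in combinations(zip(sentences, labels), 2):
        pairs.append((s1, s2, 1.0 if l1 == l2 else 0.0))
    return pairs

pairs = make_contrastive_pairs(
    ["great movie", "loved it", "terrible food"],
    ["pos", "pos", "neg"],
)
# 3 sentences -> 3 unique pairs: one positive ("pos"/"pos"), two negative
```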
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luisgasco/setfit-sentence-classifier_test")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
trieudemo11/llama_7b_attrb_cate_4m_18 | trieudemo11 | 2023-09-26T13:20:32Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T13:20:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
danielpleus/PlattGPT-LLama2ChatHF | danielpleus | 2023-09-26T13:16:21Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T13:16:18Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
BUDDYB/we | BUDDYB | 2023-09-26T13:05:47Z | 0 | 1 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2023-09-26T13:05:47Z | ---
license: bigscience-openrail-m
---
|
CyberHarem/hanazono_tae_bangdream | CyberHarem | 2023-09-26T12:55:05Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/hanazono_tae_bangdream",
"license:mit",
"region:us"
]
| text-to-image | 2023-08-14T14:53:17Z | ---
license: mit
datasets:
- CyberHarem/hanazono_tae_bangdream
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hanazono_tae_bangdream
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2640, you need to download `2640/hanazono_tae_bangdream.pt` as the embedding and `2640/hanazono_tae_bangdream.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2640**, with the score of 0.890. The trigger words are:
1. `hanazono_tae_bangdream`
2. `long_hair, green_eyes, bangs, smile, blush, black_hair, brown_hair, hair_between_eyes`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6600 | 0.865 | [Download](6600/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6160 | 0.829 | [Download](6160/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| 5720 | 0.875 | [Download](5720/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5280 | 0.875 | [Download](5280/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4840 | 0.877 | [Download](4840/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) |  |  |
| 4400 | 0.876 | [Download](4400/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 3960 | 0.811 | [Download](3960/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3520 | 0.863 | [Download](3520/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) |  |  |
| 3080 | 0.838 | [Download](3080/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) |  |  |
| **2640** | **0.890** | [**Download**](2640/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 2200 | 0.892 | [Download](2200/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) |  |  |
| 1760 | 0.871 | [Download](1760/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) |  |  |
| 1320 | 0.820 | [Download](1320/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 880 | 0.820 | [Download](880/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) |  |  |
| 440 | 0.791 | [Download](440/hanazono_tae_bangdream.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) |  |  |
|
fsarab/ppo-LunarLander-v2 | fsarab | 2023-09-26T12:55:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T12:54:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.98 +/- 19.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jh1517/taxi_q_learning | jh1517 | 2023-09-26T12:36:46Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-26T12:36:06Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_q_learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jh1517/taxi_q_learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
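The agent behind this Q-table is trained with the standard tabular Q-learning update, Q(s,a) ← Q(s,a) + α·[r + γ·max_a' Q(s',a') − Q(s,a)]. A minimal, self-contained sketch of that update (illustrative only; the hyperparameters here are arbitrary, not the ones used for this model):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.99):
    """Apply one tabular Q-learning update and return the new Q(s, a)."""
    best_next = max(Q[(s_next, a2)] for a2 in range(n_actions))
    td_target = r + gamma * best_next
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)
# Starting from an all-zero table, a reward of 1.0 moves Q(s, a) to alpha * 1.0
q_update(Q, s=0, a=1, r=1.0, s_next=2, n_actions=6)
```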
|
aolei/llm-chatglm2-ft | aolei | 2023-09-26T12:31:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"qwen",
"feature-extraction",
"custom_code",
"region:us"
]
| feature-extraction | 2023-09-20T05:53:38Z |
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("aolei/llm-chatglm2-ft", trust_remote_code=True)
tokenizer.padding_side = "left"
model = AutoModel.from_pretrained("LLaMA-Efficient-Tuning/t1_export", trust_remote_code=True).half().cuda()
model = model.eval()
# "给我一个折线图" = "Give me a line chart"
response, history = model.chat(tokenizer, "给我一个折线图", history=[])
print(response, history)
```
|
milaidy/danielll | milaidy | 2023-09-26T12:24:24Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-26T12:20:30Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### danielll Dreambooth model trained by milaidy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
lothritz/Lb_mBERT | lothritz | 2023-09-26T12:02:35Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-07T09:08:52Z | # Lb_mBERT
Lb_mBERT is a BERT-like language model for the Luxembourgish language.
We used the weights of the multilingual BERT (mBERT) language model as a starting point and continued pre-training it on the MLM task using the same corpus that we used for our LuxemBERT model (https://huggingface.co/lothritz/LuxemBERT).
We achieved higher performance on some downstream tasks than the original LuxemBERT and another Luxembourgish BERT model called DA BERT (https://huggingface.co/iolariu/DA_BERT).
If you would like to know more about our work, the pre-training corpus, or use our models or datasets, please check out/cite the following papers:
```
@inproceedings{lothritz-etal-2022-luxembert,
title = "{L}uxem{BERT}: Simple and Practical Data Augmentation in Language Model Pre-Training for {L}uxembourgish",
author = "Lothritz, Cedric and
Lebichot, Bertrand and
Allix, Kevin and
Veiber, Lisa and
Bissyande, Tegawende and
Klein, Jacques and
Boytsov, Andrey and
Lefebvre, Cl{\'e}ment and
Goujon, Anne",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.543",
pages = "5080--5089",
abstract = "Pre-trained Language Models such as BERT have become ubiquitous in NLP where they have achieved state-of-the-art performance in most NLP tasks. While these models are readily available for English and other widely spoken languages, they remain scarce for low-resource languages such as Luxembourgish. In this paper, we present LuxemBERT, a BERT model for the Luxembourgish language that we create using the following approach: we augment the pre-training dataset by considering text data from a closely related language that we partially translate using a simple and straightforward method. We are then able to produce the LuxemBERT model, which we show to be effective for various NLP tasks: it outperforms a simple baseline built with the available Luxembourgish text data as well the multilingual mBERT model, which is currently the only option for transformer-based language models in Luxembourgish. Furthermore, we present datasets for various downstream NLP tasks that we created for this study and will make available to researchers on request.",
}
```
```
@inproceedings{lothritz2023comparing,
title={Comparing Pre-Training Schemes for Luxembourgish BERT Models},
author={Lothritz, Cedric and Ezzini, Saad and Purschke, Christoph and Bissyande, Tegawend{\'e} Fran{\c{c}}ois D Assise and Klein, Jacques and Olariu, Isabella and Boytsov, Andrey and Lefebvre, Clement and Goujon, Anne},
booktitle={Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)},
year={2023}
}
``` |
yuliang555/my_awesome_wnut_model | yuliang555 | 2023-09-26T12:00:25Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-26T11:36:52Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.9256551665170365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3274
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 54 | 0.3564 | 0.0 | 0.0 | 0.0 | 0.9256 |
| No log | 2.0 | 108 | 0.3274 | 0.0 | 0.0 | 0.0 | 0.9257 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lothritz/Lb_GottBERT | lothritz | 2023-09-26T12:00:16Z | 181 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-07-06T11:33:33Z | # Lb_GottBERT
Lb_GottBERT is a BERT-like language model for the Luxembourgish language.
We used the weights of the German GottBERT language model as a starting point and continued pre-training it on the MLM task using the same corpus that we used for our LuxemBERT model (https://huggingface.co/lothritz/LuxemBERT).
We achieved higher performance on several downstream tasks than the original LuxemBERT, DA BERT (https://huggingface.co/iolariu/DA_BERT), and its "sister" model Lb_mBERT (https://huggingface.co/lothritz/Lb_mBERT).
If you would like to know more about our work, the pre-training corpus, or use our models or datasets, please check out/cite the following papers:
```
@inproceedings{lothritz-etal-2022-luxembert,
title = "{L}uxem{BERT}: Simple and Practical Data Augmentation in Language Model Pre-Training for {L}uxembourgish",
author = "Lothritz, Cedric and
Lebichot, Bertrand and
Allix, Kevin and
Veiber, Lisa and
Bissyande, Tegawende and
Klein, Jacques and
Boytsov, Andrey and
Lefebvre, Cl{\'e}ment and
Goujon, Anne",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.543",
pages = "5080--5089",
abstract = "Pre-trained Language Models such as BERT have become ubiquitous in NLP where they have achieved state-of-the-art performance in most NLP tasks. While these models are readily available for English and other widely spoken languages, they remain scarce for low-resource languages such as Luxembourgish. In this paper, we present LuxemBERT, a BERT model for the Luxembourgish language that we create using the following approach: we augment the pre-training dataset by considering text data from a closely related language that we partially translate using a simple and straightforward method. We are then able to produce the LuxemBERT model, which we show to be effective for various NLP tasks: it outperforms a simple baseline built with the available Luxembourgish text data as well the multilingual mBERT model, which is currently the only option for transformer-based language models in Luxembourgish. Furthermore, we present datasets for various downstream NLP tasks that we created for this study and will make available to researchers on request.",
}
```
```
@inproceedings{lothritz2023comparing,
title={Comparing Pre-Training Schemes for Luxembourgish BERT Models},
author={Lothritz, Cedric and Ezzini, Saad and Purschke, Christoph and Bissyande, Tegawend{\'e} Fran{\c{c}}ois D Assise and Klein, Jacques and Olariu, Isabella and Boytsov, Andrey and Lefebvre, Clement and Goujon, Anne},
booktitle={Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)},
year={2023}
}
``` |
bedus-creation/mBart-small-dataset-ii-lim-to-eng-002 | bedus-creation | 2023-09-26T11:50:49Z | 4 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-25T14:20:19Z | ---
license: apache-2.0
base_model: mBart
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/t5-small-dataset-ii-lim-to-eng-002
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/t5-small-dataset-ii-lim-to-eng-002
This model is a fine-tuned version of [mBart](https://huggingface.co/mBart) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2514
- Validation Loss: 0.3001
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0068 | 0.4628 | 0 |
| 0.4954 | 0.3665 | 1 |
| 0.4239 | 0.3488 | 2 |
| 0.3989 | 0.3300 | 3 |
| 0.3810 | 0.3232 | 4 |
| 0.3678 | 0.3192 | 5 |
| 0.3601 | 0.3140 | 6 |
| 0.3523 | 0.3110 | 7 |
| 0.3461 | 0.3099 | 8 |
| 0.3426 | 0.3074 | 9 |
| 0.3385 | 0.3055 | 10 |
| 0.3347 | 0.3019 | 11 |
| 0.3316 | 0.3036 | 12 |
| 0.3284 | 0.2997 | 13 |
| 0.3253 | 0.2983 | 14 |
| 0.3230 | 0.3004 | 15 |
| 0.3204 | 0.2977 | 16 |
| 0.3191 | 0.2957 | 17 |
| 0.3161 | 0.2931 | 18 |
| 0.3150 | 0.2925 | 19 |
| 0.3131 | 0.2921 | 20 |
| 0.3114 | 0.2909 | 21 |
| 0.3088 | 0.2925 | 22 |
| 0.3081 | 0.2922 | 23 |
| 0.3071 | 0.2894 | 24 |
| 0.3057 | 0.2889 | 25 |
| 0.3030 | 0.2898 | 26 |
| 0.3032 | 0.2884 | 27 |
| 0.3018 | 0.2873 | 28 |
| 0.2995 | 0.2887 | 29 |
| 0.3000 | 0.2864 | 30 |
| 0.2986 | 0.2868 | 31 |
| 0.2981 | 0.2854 | 32 |
| 0.2965 | 0.2867 | 33 |
| 0.2953 | 0.2862 | 34 |
| 0.2959 | 0.2848 | 35 |
| 0.2941 | 0.2849 | 36 |
| 0.2933 | 0.2867 | 37 |
| 0.2925 | 0.2875 | 38 |
| 0.2905 | 0.2843 | 39 |
| 0.2911 | 0.2843 | 40 |
| 0.2897 | 0.2863 | 41 |
| 0.2888 | 0.2855 | 42 |
| 0.2875 | 0.2852 | 43 |
| 0.2884 | 0.2878 | 44 |
| 0.2868 | 0.2853 | 45 |
| 0.2855 | 0.2843 | 46 |
| 0.2846 | 0.2852 | 47 |
| 0.2844 | 0.2833 | 48 |
| 0.2834 | 0.2847 | 49 |
| 0.2831 | 0.2851 | 50 |
| 0.2818 | 0.2839 | 51 |
| 0.2821 | 0.2843 | 52 |
| 0.2798 | 0.2858 | 53 |
| 0.2801 | 0.2843 | 54 |
| 0.2798 | 0.2851 | 55 |
| 0.2785 | 0.2880 | 56 |
| 0.2790 | 0.2853 | 57 |
| 0.2775 | 0.2860 | 58 |
| 0.2776 | 0.2848 | 59 |
| 0.2766 | 0.2875 | 60 |
| 0.2758 | 0.2864 | 61 |
| 0.2753 | 0.2857 | 62 |
| 0.2741 | 0.2899 | 63 |
| 0.2731 | 0.2904 | 64 |
| 0.2728 | 0.2887 | 65 |
| 0.2728 | 0.2879 | 66 |
| 0.2714 | 0.2877 | 67 |
| 0.2715 | 0.2901 | 68 |
| 0.2704 | 0.2864 | 69 |
| 0.2705 | 0.2876 | 70 |
| 0.2694 | 0.2925 | 71 |
| 0.2683 | 0.2923 | 72 |
| 0.2668 | 0.2910 | 73 |
| 0.2676 | 0.2878 | 74 |
| 0.2666 | 0.2928 | 75 |
| 0.2656 | 0.2903 | 76 |
| 0.2649 | 0.2913 | 77 |
| 0.2642 | 0.2912 | 78 |
| 0.2643 | 0.2944 | 79 |
| 0.2636 | 0.2910 | 80 |
| 0.2631 | 0.2922 | 81 |
| 0.2625 | 0.2983 | 82 |
| 0.2617 | 0.2945 | 83 |
| 0.2609 | 0.2914 | 84 |
| 0.2609 | 0.2974 | 85 |
| 0.2594 | 0.2960 | 86 |
| 0.2597 | 0.2977 | 87 |
| 0.2589 | 0.2972 | 88 |
| 0.2583 | 0.2970 | 89 |
| 0.2562 | 0.2951 | 90 |
| 0.2565 | 0.3004 | 91 |
| 0.2556 | 0.2971 | 92 |
| 0.2555 | 0.2963 | 93 |
| 0.2541 | 0.2991 | 94 |
| 0.2548 | 0.3000 | 95 |
| 0.2540 | 0.3015 | 96 |
| 0.2527 | 0.3004 | 97 |
| 0.2528 | 0.3012 | 98 |
| 0.2514 | 0.3001 | 99 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub | CyberHarem | 2023-09-26T11:47:22Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-26T11:30:49Z | ---
license: mit
datasets:
- CyberHarem/mifune_shioriko_lovelivenijigasakihighschoolidolclub
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mifune_shioriko_lovelivenijigasakihighschoolidolclub
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5720, you need to download `5720/mifune_shioriko_lovelivenijigasakihighschoolidolclub.pt` as the embedding and `5720/mifune_shioriko_lovelivenijigasakihighschoolidolclub.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5720**, with a score of 0.994. The trigger words are:
1. `mifune_shioriko_lovelivenijigasakihighschoolidolclub`
2. `bangs, short_hair, black_hair, red_eyes, ribbon, dark_green_hair, fang, orange_eyes, swept_bangs, hair_ribbon, blush, hair_ornament`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.987 | [Download](7800/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.994 | [Download](7280/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.992 | [Download](6760/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.991 | [Download](6240/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| **5720** | **0.994** | [**Download**](5720/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.990 | [Download](5200/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.994 | [Download](4680/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.990 | [Download](4160/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.993 | [Download](3640/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.991 | [Download](3120/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.978 | [Download](2600/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.988 | [Download](2080/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.989 | [Download](1560/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.991 | [Download](1040/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.918 | [Download](520/mifune_shioriko_lovelivenijigasakihighschoolidolclub.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
diffusers/consistency_models | diffusers | 2023-09-26T11:46:08Z | 0 | 0 | diffusers | [
"diffusers",
"region:us"
]
| null | 2023-07-05T14:58:05Z | ---
duplicated_from: ayushtues/consistency_models
---
|
openai/diffusers-cd_imagenet64_lpips | openai | 2023-09-26T11:45:49Z | 56 | 1 | diffusers | [
"diffusers",
"safetensors",
"generative model",
"unconditional image generation",
"consistency-model",
"arxiv:2303.01469",
"arxiv:2206.00364",
"arxiv:1506.03365",
"arxiv:1512.00567",
"license:mit",
"diffusers:ConsistencyModelPipeline",
"region:us"
]
| null | 2023-07-05T13:28:56Z | ---
license: mit
tags:
- generative model
- unconditional image generation
- consistency-model
---
**Disclaimer**: This model was added by the amazing community contributors [dg845](https://huggingface.co/dg845) and [ayushtues](https://huggingface.co/ayushtues)❤️
Consistency models are a new class of generative models introduced in ["Consistency Models"](https://arxiv.org/abs/2303.01469) ([paper](https://arxiv.org/pdf/2303.01469.pdf), [code](https://github.com/openai/consistency_models)) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever.
From the paper abstract:
> Diffusion models have significantly advanced the fields of image, audio, and video generation, but
they depend on an iterative sampling process that causes slow generation. To overcome this limitation,
we propose consistency models, a new family of models that generate high quality samples by directly
mapping noise to data. They support fast one-step generation by design, while still allowing multistep
sampling to trade compute for sample quality. They also support zero-shot data editing, such as image
inpainting, colorization, and super-resolution, without requiring explicit training on these tasks.
Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone
generative models altogether. Through extensive experiments, we demonstrate that they outperform
existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new
state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64 x 64 for one-step generation. When
trained in isolation, consistency models become a new family of generative models that can outperform
existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet
64 x 64 and LSUN 256 x 256.
Intuitively, a consistency model can be thought of as a model which, when evaluated on a noisy image and timestep, returns an output image sample similar to that which would be returned by running a sampling algorithm on a diffusion model.
Consistency models can be parameterized by any neural network whose input has the same dimensionality as its output, such as a U-Net.
More precisely, given a teacher diffusion model and fixed sampler, we can train ("distill") a consistency model such that when it is given a noisy image and its corresponding timestep, the output sample of the consistency model will be close to the output that would result by using the sampler on the diffusion model to produce a sample, starting at the same noisy image and timestep.
The authors call this procedure "consistency distillation (CD)".
Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
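The distillation objective described above can be written compactly. The following is a sketch using the paper's notation: $\lambda$ is a weighting function, $d$ a distance such as LPIPS, $\theta^-$ an exponential moving average of $\theta$, and $\hat{x}_{t_n}$ one ODE-solver step of the teacher diffusion model:

```latex
% Self-consistency: every point on one probability-flow ODE trajectory maps to the same origin
f(x_t, t) = f(x_{t'}, t') \quad \forall\, t, t' \in [\epsilon, T],
\qquad f(x_\epsilon, \epsilon) = x_\epsilon .

% Consistency distillation loss between adjacent discretization points
\mathcal{L}_{\mathrm{CD}} =
\mathbb{E}\big[\lambda(t_n)\, d\big(f_\theta(x_{t_{n+1}}, t_{n+1}),\, f_{\theta^-}(\hat{x}_{t_n}, t_n)\big)\big]
```

Consistency training (CT) replaces the teacher's solver step with an unbiased estimate built from the data itself; see the paper for the exact form.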
This model is a `diffusers`-compatible version of the [cd_imagenet64_lpips.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the ImageNet 64x64 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness.
See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
## Download
The original PyTorch model checkpoint can be downloaded from the [original code and model release](https://github.com/openai/consistency_models#pre-trained-models).
The `diffusers` pipeline for the `cd-imagenet64-lpips` model can be downloaded as follows:
```python
from diffusers import ConsistencyModelPipeline
pipe = ConsistencyModelPipeline.from_pretrained("openai/diffusers-cd_imagenet64_lpips")
```
## Usage
The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
Here is an example of using the `cd_imagenet64_lpips` checkpoint with `diffusers`:
```python
import torch
from diffusers import ConsistencyModelPipeline
device = "cuda"
# Load the cd_imagenet64_lpips checkpoint.
model_id_or_path = "openai/diffusers-cd_imagenet64_lpips"
pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Onestep Sampling
image = pipe(num_inference_steps=1).images[0]
image.save("cd_imagenet64_lpips_onestep_sample.png")
# Onestep sampling, class-conditional image generation
# ImageNet-64 class label 145 corresponds to king penguins
image = pipe(num_inference_steps=1, class_labels=145).images[0]
image.save("cd_imagenet64_lpips_onestep_sample_penguin.png")
# Multistep sampling, class-conditional image generation
# Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo:
# https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L74
image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
image.save("cd_imagenet64_lpips_multistep_sample_penguin.png")
```
## Model Details
- **Model type:** Consistency model unconditional image generation model, distilled from a diffusion model
- **Dataset:** ImageNet 64x64
- **License:** MIT
- **Model Description:** This model performs unconditional image generation. Its main component is a U-Net, which parameterizes the consistency model. This model was distilled by the Consistency Model authors from an EDM diffusion model, also originally trained by the authors.
- **Resources for more information:** [Paper](https://arxiv.org/abs/2303.01469), [GitHub Repository](https://github.com/openai/consistency_models), [Original Model Card](/openai/consistency_models/blob/main/model-card.md)
## Datasets
_Note: This section is taken from the ["Datasets" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#datasets)_.
The models that we are making available have been trained on the [ILSVRC 2012 subset of ImageNet](http://www.image-net.org/challenges/LSVRC/2012/) or on individual categories from [LSUN](https://arxiv.org/abs/1506.03365). Here we outline the characteristics of these datasets that influence the behavior of the models:
**ILSVRC 2012 subset of ImageNet**: This dataset was curated in 2012 and has around a million pictures, each of which belongs to one of 1,000 categories. A significant number of the categories in this dataset are animals, plants, and other naturally occurring objects. Although many photographs include humans, these humans are typically not represented by the class label (for example, the category "Tench, tinca tinca" includes many photographs of individuals holding fish).
**LSUN**: This dataset was collected in 2015 by a combination of human labeling via Amazon Mechanical Turk and automated data labeling. Both classes that we consider have more than a million images. The dataset creators discovered that when assessed by trained experts, the label accuracy was approximately 90% throughout the entire LSUN dataset. The pictures are gathered from the internet, and those in the cat class often follow a "meme" format. Occasionally, people, including faces, appear in these photographs.
## Performance
_Note: This section is taken from the ["Performance" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#performance)_.
These models are intended to generate samples consistent with their training distributions.
This has been measured in terms of FID, Inception Score, Precision, and Recall.
These metrics all rely on the representations of a [pre-trained Inception-V3 model](https://arxiv.org/abs/1512.00567),
which was trained on ImageNet, and so is likely to focus more on the ImageNet classes (such as animals) than on other visual features (such as human faces).
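Since FID compares Gaussian fits to Inception-V3 feature activations, the metric itself reduces to a closed-form Fréchet distance between two Gaussians. Here is a minimal NumPy sketch of that final step (feature extraction is omitted; `mu`/`cov` are assumed to be the mean and covariance of Inception activations for real and generated images):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):

    FID = ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 * (cov1 @ cov2)^(1/2)).

    Uses the symmetric form sqrt(cov1) @ cov2 @ sqrt(cov1), whose trace of the
    matrix square root equals Tr((cov1 @ cov2)^(1/2)), so a plain
    eigendecomposition of symmetric PSD matrices suffices.
    """
    def sqrtm_psd(mat):
        # Matrix square root of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(mat)
        vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
        return (vecs * np.sqrt(vals)) @ vecs.T

    diff = mu1 - mu2
    s1 = sqrtm_psd(cov1)
    covmean = sqrtm_psd(s1 @ cov2 @ s1)
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * np.trace(covmean))
```

For identical Gaussians the distance is zero; shifting the mean of an isotropic unit Gaussian by a vector `d` increases it by `||d||^2`.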
## Intended Use
_Note: This section is taken from the ["Intended Use" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#intended-use)_.
These models are intended to be used for research purposes only. In particular, they can be used as a baseline for generative modeling research, or as a starting point for advancing such research. These models are not intended to be commercially deployed. Additionally, they are not intended to be used to create propaganda or offensive imagery.
## Limitations
_Note: This section is taken from the ["Limitations" section of the original model card](https://github.com/openai/consistency_models/blob/main/model-card.md#limitations)_.
These models sometimes produce highly unrealistic outputs, particularly when generating images containing human faces.
This may stem from ImageNet's emphasis on non-human objects.
In consistency distillation and training, minimizing LPIPS results in better sample quality, as evidenced by improved FID and Inception scores. However, it also carries the risk of overestimating model performance, because LPIPS uses a VGG network pre-trained on ImageNet, while FID and Inception scores also rely on convolutional neural networks (the Inception network in particular) pre-trained on the same ImageNet dataset. Although these two convolutional neural networks do not share the same architecture and we extract latents from them in substantially different ways, knowledge leakage is still plausible which can undermine the fidelity of FID and Inception scores.
Because ImageNet and LSUN contain images from the internet, they include photos of real people, and the model may have memorized some of the information contained in these photos. However, these images are already publicly available, and existing generative models trained on ImageNet have not demonstrated significant leakage of this information.
|
colab086/mid | colab086 | 2023-09-26T11:28:48Z | 0 | 0 | null | [
"en",
"license:openrail",
"region:us"
]
| null | 2023-09-26T11:24:24Z | ---
license: openrail
language:
- en
--- |
IlyaGusev/saiga2_13b_gguf | IlyaGusev | 2023-09-26T11:27:58Z | 272 | 47 | null | [
"gguf",
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct",
"dataset:lksy/ru_instruct_gpt4",
"license:llama2",
"region:us"
]
| text-generation | 2023-07-26T01:09:47Z | ---
datasets:
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/ru_turbo_alpaca_evol_instruct
- lksy/ru_instruct_gpt4
language:
- ru
inference: false
pipeline_tag: conversational
license: llama2
---
Llama.cpp-compatible versions of the original [13B model](https://huggingface.co/IlyaGusev/saiga2_13b_lora).
Download one of the versions, for example `model-q4_K.gguf`.
```
wget https://huggingface.co/IlyaGusev/saiga2_13b_gguf/resolve/main/model-q4_K.gguf
```
Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
```
wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py
```
How to run:
```
pip install llama-cpp-python fire
python3 interact_llamacpp.py model-q4_K.gguf
```
System requirements:
* 18GB RAM for q8_K
* 10GB RAM for q4_K
|
mindchain/ops | mindchain | 2023-09-26T11:26:50Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-26T10:52:04Z | ---
library_name: peft
---
## Training procedure
The following `gptq` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: True
### Framework versions
- PEFT 0.5.0
|
milaidy/dcaa | milaidy | 2023-09-26T11:19:34Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-26T11:15:05Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dcaa Dreambooth model trained by milaidy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
s3nh/R136a1-MythoMax-L2-13B-exl2-GGUF | s3nh | 2023-09-26T11:14:48Z | 0 | 1 | transformers | [
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T11:14:48Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF format model files for [this project](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
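To make the single-file layout concrete, the fixed-size GGUF header (magic, version, tensor count, metadata key/value count) can be parsed with a few lines of standard-library code. This is a sketch based on the published GGUF layout (little-endian by default, 64-bit counts as of spec v2+), not part of this repository:

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(buf: bytes) -> dict:
    """Parse the fixed 24-byte GGUF header that precedes the metadata KV section."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

# With a real model file one would read the first 24 bytes, e.g.:
# with open("model-q4_K.gguf", "rb") as f:
#     print(read_gguf_header(f.read(24)))
```

The metadata key-value pairs that follow the header are typed, which is what allows new fields to be added without breaking older readers.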
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:------|:-----------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|:-------|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
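For context, the perplexities above are the exponential of the average per-token negative log-likelihood over a held-out text, so lower is better, and quantization-induced quality loss shows up as a small increase (e.g. 5.9066 at F16 vs. 6.7764 at Q2_K for 7B). A minimal sketch of the relationship (the log-probabilities here are placeholders, not llama.cpp output):

```python
import math

def perplexity(token_logprobs):
    """exp(-mean log-likelihood per token): 1.0 for a perfect model, higher is worse."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model assigning probability 0.5 to every token has perplexity 2.
```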
### Inference
TODO
# Original model card
|
ldos/text_shortening_model_v56 | ldos | 2023-09-26T11:12:06Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-26T09:38:18Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v56
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2446
- Rouge1: 0.3315
- Rouge2: 0.1705
- Rougel: 0.302
- Rougelsum: 0.302
- Bert precision: 0.8254
- Bert recall: 0.8322
- Average word count: 7.3374
- Max word count: 18
- Min word count: 2
- Average token count: 11.3745
- % shortened texts with length > 12: 4.7763
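The length columns reported above (average/max/min word count and the share of outputs longer than 12 words) are simple word-count statistics over the generated texts. A hedged sketch of how such columns can be computed — the exact tokenization used by the evaluation script is an assumption (plain whitespace splitting here):

```python
def length_stats(texts):
    """Word-count summary mirroring the card's length columns."""
    counts = [len(t.split()) for t in texts]
    return {
        "average_word_count": sum(counts) / len(counts),
        "max_word_count": max(counts),
        "min_word_count": min(counts),
        "pct_longer_than_12": 100.0 * sum(c > 12 for c in counts) / len(counts),
    }
```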
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 3.2947 | 1.0 | 288 | 2.7198 | 0.2581 | 0.1248 | 0.2329 | 0.2328 | 0.7592 | 0.7746 | 8.0751 | 18 | 0 | 13.4678 | 12.5095 |
| 2.8745 | 2.0 | 576 | 2.5497 | 0.2967 | 0.148 | 0.2692 | 0.269 | 0.8107 | 0.8193 | 7.7149 | 18 | 0 | 11.8552 | 8.3397 |
| 2.7549 | 3.0 | 864 | 2.4721 | 0.31 | 0.1548 | 0.2806 | 0.2805 | 0.8158 | 0.8247 | 7.7263 | 18 | 0 | 11.7786 | 6.975 |
| 2.6785 | 4.0 | 1152 | 2.4212 | 0.3135 | 0.1582 | 0.2834 | 0.2837 | 0.8185 | 0.8264 | 7.5815 | 18 | 0 | 11.6005 | 6.3685 |
| 2.6289 | 5.0 | 1440 | 2.3872 | 0.3188 | 0.1622 | 0.2879 | 0.2882 | 0.8196 | 0.8278 | 7.602 | 18 | 0 | 11.6497 | 6.5959 |
| 2.587 | 6.0 | 1728 | 2.3611 | 0.3224 | 0.1633 | 0.2909 | 0.2911 | 0.8202 | 0.8291 | 7.6232 | 18 | 0 | 11.6694 | 6.5959 |
| 2.5615 | 7.0 | 2016 | 2.3401 | 0.3284 | 0.168 | 0.297 | 0.2972 | 0.8222 | 0.8303 | 7.4936 | 18 | 0 | 11.5299 | 5.8378 |
| 2.5354 | 8.0 | 2304 | 2.3223 | 0.3299 | 0.1703 | 0.299 | 0.299 | 0.8228 | 0.831 | 7.5171 | 18 | 0 | 11.5519 | 5.9136 |
| 2.5074 | 9.0 | 2592 | 2.3069 | 0.3314 | 0.1702 | 0.2999 | 0.3 | 0.8237 | 0.832 | 7.5383 | 18 | 2 | 11.5595 | 5.8378 |
| 2.4868 | 10.0 | 2880 | 2.2944 | 0.3317 | 0.1713 | 0.3014 | 0.3013 | 0.8246 | 0.8317 | 7.4193 | 18 | 2 | 11.4519 | 5.5345 |
| 2.4773 | 11.0 | 3168 | 2.2830 | 0.3322 | 0.1705 | 0.3013 | 0.3013 | 0.8247 | 0.8319 | 7.3904 | 18 | 2 | 11.4238 | 5.0038 |
| 2.4571 | 12.0 | 3456 | 2.2738 | 0.3288 | 0.1685 | 0.2987 | 0.2987 | 0.8242 | 0.831 | 7.3343 | 18 | 2 | 11.3715 | 4.5489 |
| 2.4494 | 13.0 | 3744 | 2.2672 | 0.3322 | 0.1705 | 0.3013 | 0.3014 | 0.8251 | 0.8319 | 7.3351 | 18 | 2 | 11.3798 | 4.5489 |
| 2.4401 | 14.0 | 4032 | 2.2611 | 0.33 | 0.1692 | 0.3004 | 0.3005 | 0.8246 | 0.8315 | 7.3639 | 18 | 2 | 11.4139 | 4.8522 |
| 2.431 | 15.0 | 4320 | 2.2564 | 0.3303 | 0.1698 | 0.3004 | 0.3004 | 0.8248 | 0.8317 | 7.3745 | 18 | 2 | 11.4238 | 5.0796 |
| 2.4253 | 16.0 | 4608 | 2.2522 | 0.3308 | 0.1704 | 0.3016 | 0.3014 | 0.8252 | 0.8319 | 7.3328 | 18 | 2 | 11.3791 | 4.8522 |
| 2.4111 | 17.0 | 4896 | 2.2490 | 0.3313 | 0.1705 | 0.3017 | 0.3017 | 0.8254 | 0.8319 | 7.3222 | 18 | 2 | 11.3563 | 4.8522 |
| 2.4125 | 18.0 | 5184 | 2.2464 | 0.3313 | 0.1702 | 0.3017 | 0.3017 | 0.8254 | 0.8321 | 7.3328 | 18 | 2 | 11.3654 | 4.8522 |
| 2.4061 | 19.0 | 5472 | 2.2450 | 0.3313 | 0.1701 | 0.3017 | 0.3018 | 0.8254 | 0.8321 | 7.3359 | 18 | 2 | 11.3723 | 4.7763 |
| 2.4129 | 20.0 | 5760 | 2.2446 | 0.3315 | 0.1705 | 0.302 | 0.302 | 0.8254 | 0.8322 | 7.3374 | 18 | 2 | 11.3745 | 4.7763 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
learn3r/longt5_xl_summ_screen_bp_only_30 | learn3r | 2023-09-26T11:07:19Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/summ_screen_fd_bp",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-22T21:21:08Z | ---
base_model: /exports/eddie/scratch/s1970716/models/summarization/longt5_xl_summ_screen_bp_only/checkpoint-210
tags:
- generated_from_trainer
datasets:
- learn3r/summ_screen_fd_bp
metrics:
- rouge
model-index:
- name: longt5_xl_summ_screen_bp_only_30
results:
- task:
name: Summarization
type: summarization
dataset:
name: learn3r/summ_screen_fd_bp
type: learn3r/summ_screen_fd_bp
metrics:
- name: Rouge1
type: rouge
value: 40.4388
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_summ_screen_bp_only_30
This model is a fine-tuned version of [/exports/eddie/scratch/s1970716/models/summarization/longt5_xl_summ_screen_bp_only/checkpoint-210](https://huggingface.co//exports/eddie/scratch/s1970716/models/summarization/longt5_xl_summ_screen_bp_only/checkpoint-210) on the learn3r/summ_screen_fd_bp dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2376
- Rouge1: 40.4388
- Rouge2: 16.4662
- Rougel: 28.0771
- Rougelsum: 38.3405
- Gen Len: 246.7396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
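The `total_train_batch_size` above is not an independent setting — it is the product of the per-device batch size and the gradient accumulation steps. As a quick sanity check (assuming a single device, which the card does not state explicitly):

```python
# Effective (total) train batch size = per-device batch size
# x gradient accumulation steps x number of devices.
train_batch_size = 8
gradient_accumulation_steps = 32
num_devices = 1  # assumption: single device; the card does not list a device count

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # -> 256, matching the hyperparameter list above
```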
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 0.324 | 0.97 | 14 | 2.2376 | 40.4388 | 16.4662 | 28.0771 | 38.3405 | 246.7396 |
| 0.2707 | 1.95 | 28 | 2.3204 | 40.2873 | 16.7641 | 27.3895 | 38.2689 | 307.3787 |
| 0.2217 | 2.99 | 43 | 2.5281 | 31.9916 | 13.8136 | 22.1895 | 30.623 | 501.9320 |
| 0.1776 | 3.97 | 57 | 2.7530 | 31.7535 | 13.8852 | 22.8653 | 30.3796 | 489.6183 |
| 0.1424 | 4.94 | 71 | 2.6578 | 32.117 | 14.2141 | 22.3733 | 30.8328 | 502.1124 |
| 0.1449 | 5.98 | 86 | 2.5508 | 35.3448 | 13.8478 | 24.9044 | 33.6108 | 357.3136 |
| 0.1191 | 6.96 | 100 | 3.1622 | 37.2189 | 16.0076 | 25.7011 | 35.294 | 408.8669 |
| 0.0879 | 8.0 | 115 | 2.8510 | 39.8825 | 16.8073 | 27.2428 | 37.9568 | 318.2278 |
| 0.0899 | 8.97 | 129 | 2.9138 | 31.7139 | 13.7066 | 21.8844 | 30.5075 | 500.4053 |
| 0.0656 | 9.95 | 143 | 3.1616 | 33.055 | 14.5841 | 22.5883 | 31.7565 | 488.1686 |
| 0.0542 | 10.99 | 158 | 3.3630 | 43.7514 | 18.9011 | 29.9017 | 41.6887 | 198.8077 |
| 0.0557 | 11.97 | 172 | 3.3826 | 42.3089 | 18.2735 | 29.0356 | 40.4154 | 270.9675 |
| 0.0542 | 12.94 | 186 | 3.4408 | 40.7691 | 16.529 | 28.3999 | 38.9723 | 186.7308 |
| 0.0596 | 13.98 | 201 | 3.5253 | 37.0037 | 15.9098 | 25.2808 | 35.3868 | 398.4704 |
| 0.0385 | 14.61 | 210 | 3.4990 | 32.5815 | 14.2951 | 22.4501 | 31.2928 | 499.3107 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
RogerB/KinyaBERT-small-pretrained-kinyarwanda | RogerB | 2023-09-26T11:02:39Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:jean-paul/KinyaBERT-small",
"base_model:finetune:jean-paul/KinyaBERT-small",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-26T10:51:04Z | ---
base_model: jean-paul/KinyaBERT-small
tags:
- generated_from_trainer
model-index:
- name: KinyaBERT-small-pretrained-kinyarwanda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KinyaBERT-small-pretrained-kinyarwanda
This model is a fine-tuned version of [jean-paul/KinyaBERT-small](https://huggingface.co/jean-paul/KinyaBERT-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
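The optimizer line above describes the standard Adam update rule. As an illustrative sketch only (the actual Trainer implementation also handles weight decay, learning-rate scheduling, and per-parameter state), one bias-corrected Adam step looks like:

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One bias-corrected Adam update with the hyperparameters listed above."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction, step t starts at 1
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# On the very first step the bias-corrected update is roughly lr * sign(grad),
# so p below ends up near -2e-05 regardless of the gradient's magnitude.
p, m, v = adam_step(param=0.0, grad=3.0, m=0.0, v=0.0, t=1)
```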
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5078 | 1.0 | 2200 | 3.2187 |
| 3.278 | 2.0 | 4400 | 3.0892 |
| 3.1825 | 3.0 | 6600 | 3.0563 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
pe4enov/saiga_7b_lora_8bit | pe4enov | 2023-09-26T11:01:41Z | 1 | 0 | peft | [
"peft",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"region:us"
]
| null | 2023-07-24T09:28:29Z | ---
library_name: peft
base_model: huggyllama/llama-7b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars | CyberHarem | 2023-09-26T10:42:46Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-26T10:22:41Z | ---
license: mit
datasets:
- CyberHarem/nakasu_kasumi_loveliveschoolidolfestivalallstars
pipeline_tag: text-to-image
tags:
- art
---
# Lora of nakasu_kasumi_loveliveschoolidolfestivalallstars
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4160, you need to download `4160/nakasu_kasumi_loveliveschoolidolfestivalallstars.pt` as the embedding and `4160/nakasu_kasumi_loveliveschoolidolfestivalallstars.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4160**, with a score of 0.968. The trigger words are:
1. `nakasu_kasumi_loveliveschoolidolfestivalallstars`
2. `short_hair, bangs, red_eyes, brown_hair, blush, smile, bob_cut, hair_ornament, bow, asymmetrical_hair, grey_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | pattern_22 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.957 | [Download](7800/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.968 | [Download](7280/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bikini.png) | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.961 | [Download](6760/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bikini.png) | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.952 | [Download](6240/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.958 | [Download](5720/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.963 | [Download](5200/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bikini.png) | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.953 | [Download](4680/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bikini.png) | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| **4160** | **0.968** | [**Download**](4160/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bikini.png) | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.962 | [Download](3640/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.951 | [Download](3120/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.960 | [Download](2600/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.951 | [Download](2080/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.913 | [Download](1560/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.934 | [Download](1040/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.880 | [Download](520/nakasu_kasumi_loveliveschoolidolfestivalallstars.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
RogerB/kinyaRoberta-large-pretrained-kinyarwanda | RogerB | 2023-09-26T10:24:59Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:jean-paul/kinyaRoberta-large",
"base_model:finetune:jean-paul/kinyaRoberta-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-26T10:06:06Z | ---
base_model: jean-paul/kinyaRoberta-large
tags:
- generated_from_trainer
model-index:
- name: kinyaRoberta-large-pretrained-kinyarwanda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kinyaRoberta-large-pretrained-kinyarwanda
This model is a fine-tuned version of [jean-paul/kinyaRoberta-large](https://huggingface.co/jean-paul/kinyaRoberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5219 | 1.0 | 2200 | 3.1955 |
| 3.228 | 2.0 | 4400 | 3.0451 |
| 3.1224 | 3.0 | 6600 | 3.0429 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
cjdshr/my_awesome_billsum_model | cjdshr | 2023-09-26T10:24:22Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-12T08:03:45Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.14
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5783
- Rouge1: 0.14
- Rouge2: 0.0488
- Rougel: 0.1161
- Rougelsum: 0.1159
- Gen Len: 19.0
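The Rouge1/Rouge2/RougeL numbers above are n-gram-overlap F1 scores reported here as 0–1 fractions rather than percentages. As a rough illustration of what Rouge1 measures (the real `rouge_score` package additionally applies its own tokenization and optional stemming):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: harmonic mean of clipped unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat is on the mat", "the cat sat on the mat"))  # ~0.833
```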
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8730 | 0.1268 | 0.0358 | 0.1053 | 0.1052 | 19.0 |
| No log | 2.0 | 124 | 2.6594 | 0.1352 | 0.0479 | 0.1123 | 0.1125 | 19.0 |
| No log | 3.0 | 186 | 2.5966 | 0.1369 | 0.0471 | 0.1139 | 0.1138 | 19.0 |
| No log | 4.0 | 248 | 2.5783 | 0.14 | 0.0488 | 0.1161 | 0.1159 | 19.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
s3nh/FreedomIntelligence-AceGPT-13B-chat-GGUF | s3nh | 2023-09-26T10:21:29Z | 7 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-26T10:06:46Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/FreedomIntelligence/AceGPT-13B-chat).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:------|:--------|:-----|:-------|:-------|:-------|:-----|:-----|:-------|:-------|:-----|:-----|:-------|:-------|:-----|:-----|:----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
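One way to read these numbers is as the relative perplexity cost of each quantization level over the F16 baseline; a small sketch using a few of the 7B values above:

```python
# Relative perplexity increase over the F16 baseline for the 7B model.
# Values copied from the perplexity table above (llama.cpp-style quant levels).
ppl_7b = {"Q2_K": 6.7764, "Q4_K_M": 5.9601, "Q8_0": 5.9070, "F16": 5.9066}

baseline = ppl_7b["F16"]
for quant, ppl in ppl_7b.items():
    increase = 100 * (ppl / baseline - 1)
    print(f"{quant}: +{increase:.2f}% perplexity vs F16")
# Q2_K is roughly 14.7% worse than F16, while Q4_K_M stays within about 1%.
```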
### inference
TODO
# Original model card
|
kmaksatk/controlnet_80k_data_blip_2 | kmaksatk | 2023-09-26T10:19:41Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-26T07:37:02Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-kmaksatk/controlnet_80k_data_blip_2
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: Man doing a cartwheel in blue suit

prompt: Man doing a cartwheel in blue suit

prompt: Man doing a cartwheel in blue suit

|
hasnain3142/phi-1_5-finetuned-gsm8k | hasnain3142 | 2023-09-26T10:17:45Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
]
| null | 2023-09-26T09:57:37Z | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
soczyste-milfy/cycate | soczyste-milfy | 2023-09-26T10:12:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-26T10:09:25Z | # Juicy MILFs Are Wonderful
## Introduction
Over the years, society has placed a growing emphasis on youth and beauty as the highest value. Yet many people are beginning to notice that maturity carries its own unique charm and wisdom. In this article we will try to dispel myths about age and highlight why <a href="https://unsee.pl/chetne-milfy">juicy MILFs</a> are wonderful on many different levels.
## Life experience
With age comes experience, which is invaluable in many aspects of life. Juicy MILFs often have a rich history full of diverse experiences, which makes them interesting people with a great deal to offer in conversations and relationships.
## Self-confidence
Through years of working on themselves and the experience they have gained, mature women acquire a self-confidence that younger people often lack. This confidence shows not only in their behavior, but also in their ability to make decisions, manage their time, and set their own priorities.
## Emotional stability
Along with life experience and self-confidence comes emotional stability. Juicy MILFs are often more emotionally balanced, which makes them excellent support for a partner, children, or friends.
## Wisdom
There is no denying that maturity often brings wisdom. Experiences, both good and bad, teach us and shape character. Wisdom is not only knowledge but also the ability to apply it in practice, which is invaluable in difficult life situations.
## Understanding one's own needs
In youth we often do not fully understand what we want from life. Juicy MILFs already have clearly defined needs and goals, which makes them more fulfilled and satisfied with life.
## Summary
Mature women are wonderful in many different ways. Their life experience, self-confidence, emotional stability, and wisdom make them exceptionally valuable and inspiring people. Moving away from stereotypes about age and recognizing the value that maturity brings is a step toward a deeper and more satisfying life for all of us. |
Mkmworld/all-classification | Mkmworld | 2023-09-26T10:07:00Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-09-26T10:05:19Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 9.999999747378752e-05 |
| decay | 1e-05 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
prateeky2806/bert-base-uncased-qnli-ia3-epochs-2-lr-0.005 | prateeky2806 | 2023-09-26T10:05:51Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-26T01:37:04Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-qnli-ia3-epochs-2-lr-0.005
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-qnli-ia3-epochs-2-lr-0.005
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3135
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 2
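With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.06`, the learning rate ramps up over the first 6% of optimization steps and then decays linearly to zero. A minimal sketch of that schedule (shape only — the Trainer computes the exact warmup step count internally):

```python
def linear_schedule_with_warmup(step, total_steps, peak_lr=0.005, warmup_ratio=0.06):
    """LR at a given step: linear warmup for warmup_ratio of training, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 6542  # total optimizer steps over the 2 epochs, per this card's training log
print(linear_schedule_with_warmup(0, total))                  # 0.0 (start of warmup)
print(linear_schedule_with_warmup(int(total * 0.06), total))  # 0.005 (peak)
print(linear_schedule_with_warmup(total, total))              # 0.0 (end of training)
```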
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3738 | 1.0 | 3271 | 0.3193 | 0.88 |
| 0.3316 | 2.0 | 6542 | 0.3135 | 0.88 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Dibyasha2023/sd-class-butterflies-32 | Dibyasha2023 | 2023-09-26T10:01:31Z | 45 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
]
| unconditional-image-generation | 2023-09-26T10:01:19Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Dibyasha2023/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|