| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
junjuice0/VOXO-v0-4 | junjuice0 | 2023-09-28T08:26:25Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-28T08:26:25Z | ---
license: creativeml-openrail-m
---
|
asmaa1/videomae-base-groub10-finetuned-SLT-subset | asmaa1 | 2023-09-28T08:25:10Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-09-28T07:55:26Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-groub10-finetuned-SLT-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-groub10-finetuned-SLT-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7437
- Accuracy: 0.1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 20
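With `lr_scheduler_warmup_ratio: 0.1` over 20 training steps, the scheduler warms up for the first 2 steps; a quick sketch of that arithmetic:

```python
training_steps = 20
warmup_ratio = 0.1

# Linear warmup occupies the first warmup_ratio fraction of training
warmup_steps = round(training_steps * warmup_ratio)
print(warmup_steps)  # 2
```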
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.25 | 5 | 2.9087 | 0.1 |
| 3.1062 | 1.25 | 10 | 2.8303 | 0.1 |
| 3.1062 | 2.25 | 15 | 2.7706 | 0.1 |
| 2.8191 | 3.25 | 20 | 2.7437 | 0.1 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
hw2942/chinese-lert-base-SSE50 | hw2942 | 2023-09-28T08:15:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:hfl/chinese-lert-base",
"base_model:finetune:hfl/chinese-lert-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-28T08:09:10Z | ---
license: apache-2.0
base_model: hfl/chinese-lert-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chinese-lert-base-wallstreetcn-morning-news-market-overview-SSE50-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-lert-base-wallstreetcn-morning-news-market-overview-SSE50-10
This model is a fine-tuned version of [hfl/chinese-lert-base](https://huggingface.co/hfl/chinese-lert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3547
- Accuracy: 0.6364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 34 | 3.8141 | 0.6364 |
| No log | 2.0 | 68 | 3.0470 | 0.6667 |
| No log | 3.0 | 102 | 3.6099 | 0.6364 |
| No log | 4.0 | 136 | 3.5038 | 0.5758 |
| No log | 5.0 | 170 | 3.7060 | 0.6364 |
| No log | 6.0 | 204 | 3.6808 | 0.5758 |
| No log | 7.0 | 238 | 3.4109 | 0.6667 |
| No log | 8.0 | 272 | 3.9414 | 0.5455 |
| No log | 9.0 | 306 | 3.3539 | 0.6364 |
| No log | 10.0 | 340 | 3.3547 | 0.6364 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
dhrf/lora-Llama-2-7b-hf-qa-1epoch | dhrf | 2023-09-28T08:06:57Z | 0 | 1 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T07:51:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
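Conceptually, `load_in_8bit: True` stores each weight tensor as int8 values plus a scale factor. A toy absmax-quantization sketch of the idea (not the actual bitsandbytes kernels, which work block-wise and handle outliers separately):

```python
def absmax_quantize(values):
    # Map the largest magnitude to 127, then round everything into int8 range
    scale = max(abs(v) for v in values) / 127
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    # Recover approximate float weights from int8 values and the scale
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03]
quantized, scale = absmax_quantize(weights)
restored = dequantize(quantized, scale)
```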
### Framework versions
- PEFT 0.5.0
|
yaojiapeng/vit-base-beans | yaojiapeng | 2023-09-28T08:02:56Z | 193 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-28T08:01:14Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0861
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3095 | 1.0 | 130 | 0.2102 | 0.9774 |
| 0.2114 | 2.0 | 260 | 0.1360 | 0.9624 |
| 0.1861 | 3.0 | 390 | 0.1154 | 0.9699 |
| 0.0827 | 4.0 | 520 | 0.1022 | 0.9774 |
| 0.1281 | 5.0 | 650 | 0.0861 | 0.9850 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
oshita-n/textual_inversion_11 | oshita-n | 2023-09-28T08:01:20Z | 36 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-28T07:55:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - oshita-n/textual_inversion_11
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. Some example images follow.
|
TexR6/q-FrozenLake-v1-4x4-noSlippery | TexR6 | 2023-09-28T07:57:19Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-28T07:57:16Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` here is the helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="TexR6/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
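The loaded dictionary holds the learned Q-table; acting greedily means taking the arg-max action for the current state. A minimal pure-Python sketch with a toy Q-table (the table layout is an assumption based on the course convention):

```python
def greedy_action(qtable, state):
    # Choose the action with the highest Q-value in this state
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy Q-table: 2 states x 3 actions, for illustration only
toy_qtable = [
    [0.1, 0.5, 0.2],
    [0.9, 0.0, 0.3],
]
print(greedy_action(toy_qtable, 0))  # 1
print(greedy_action(toy_qtable, 1))  # 0
```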
|
npvinHnivqn/bloom-cot-small | npvinHnivqn | 2023-09-28T07:52:44Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-24T05:47:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
eugene6/a2c-PandaReachDense-v3 | eugene6 | 2023-09-28T07:51:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-28T07:46:34Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
filipealmeida/Mistral-7B-Instruct-v0.1-GGUF | filipealmeida | 2023-09-28T07:50:49Z | 16 | 0 | null | [
"gguf",
"finetuned",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-28T07:33:46Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---
# GGUF version of Mistral-7B-Instruct-v0.1
GGUF version of Mistral-7B-Instruct-v0.1, compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp).
This is the unquantized fp16 version of the model.
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
text = ("<s>[INST] What is your favourite condiment? [/INST]"
        "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
        "[INST] Do you have mayonnaise recipes? [/INST]")
encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
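The wrapping described above can be illustrated with a small helper that assembles a conversation into the expected string; this is a sketch of the format, not the official tokenizer chat template:

```python
BOS, EOS = "<s>", "</s>"

def format_conversation(turns):
    """Wrap alternating (role, text) turns in the [INST] instruction format."""
    prompt = BOS
    for role, text in turns:
        if role == "user":
            prompt += f"[INST] {text} [/INST]"
        else:  # assistant reply, closed by the end-of-sentence token
            prompt += f"{text}{EOS} "
    return prompt

print(format_conversation([("user", "What is your favourite condiment?")]))
# <s>[INST] What is your favourite condiment? [/INST]
```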
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
samuelleecong/speecht5_finetuned_swahili | samuelleecong | 2023-09-28T07:48:36Z | 82 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-09-28T04:45:29Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_swahili
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_swahili
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the "A Kiswahili Dataset for Development of Text-To-Speech System" dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4618
- eval_runtime: 11.2006
- eval_samples_per_second: 54.015
- eval_steps_per_second: 27.052
- epoch: 11.76
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
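The total train batch size above is the per-device batch size times the gradient-accumulation steps, and the scheduler ramps the learning rate up over the warmup steps before decaying linearly. A rough sketch of both (single device assumed):

```python
train_batch_size = 4
gradient_accumulation_steps = 8

# Effective batch size on a single device
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32

def linear_warmup_lr(step, base_lr=1e-05, warmup_steps=500, training_steps=4000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (training_steps - step) / (training_steps - warmup_steps))

print(linear_warmup_lr(500))   # peak learning rate: 1e-05
print(linear_warmup_lr(4000))  # end of training: 0.0
```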
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
raghvendramall/esm2_t6_8M_UR50D-localization-v1-finetuned-localization | raghvendramall | 2023-09-28T07:35:12Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t6_8M_UR50D",
"base_model:finetune:facebook/esm2_t6_8M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-05-25T11:51:47Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
base_model: facebook/esm2_t6_8M_UR50D
model-index:
- name: esm2_t6_8M_UR50D-localization-v1-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t6_8M_UR50D-localization-v1-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2123
- F1: 0.7355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4992 | 1.0 | 2048 | 0.5178 | 0.5567 |
| 0.4623 | 2.0 | 4096 | 0.3970 | 0.6536 |
| 0.3754 | 3.0 | 6144 | 0.5035 | 0.7153 |
| 0.3396 | 4.0 | 8192 | 0.6703 | 0.6598 |
| 0.2128 | 5.0 | 10240 | 0.7133 | 0.6876 |
| 0.1336 | 6.0 | 12288 | 0.9024 | 0.7065 |
| 0.0607 | 7.0 | 14336 | 0.9994 | 0.6841 |
| 0.025 | 8.0 | 16384 | 1.1050 | 0.7046 |
| 0.0098 | 9.0 | 18432 | 1.2199 | 0.7119 |
| 0.0047 | 10.0 | 20480 | 1.2123 | 0.7355 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
infCapital/llama2-7b-chat | infCapital | 2023-09-28T07:22:49Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-23T15:49:58Z | ---
{}
---
Cloned from meta-llama/llama2-7b-chat-hf with the vocabulary extended to 44,800 entries (adding Vietnamese tokens); it may not be suitable for general-purpose use. |
metabloit/swahBERT | metabloit | 2023-09-28T07:11:41Z | 115 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sw",
"dataset:metabloit/offensive-swahili-text",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-14T11:14:09Z | ---
license: mit
language:
- sw
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: v1
results:
- task:
type: Offensive words classifier
name: Text Classification
metrics:
- type: f1
value: 0.9272349272349272
name: F1 Score
verified: false
- type: precision
value: 0.9550321199143469
name: Precision
verified: false
- type: recall
value: 0.901010101010101
name: Recall
verified: false
- type: accuracy
value: 0.9292214357937311
name: Accuracy
verified: false
datasets:
- metabloit/offensive-swahili-text
---
# swahBERT
This model was fine-tuned using the dataset listed below.
It achieves the following results on the evaluation set:
- Loss: 0.4982
- Accuracy: 0.9292
- Precision: 0.9550
- Recall: 0.9010
- F1: 0.9272
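The F1 above is the harmonic mean of the reported precision and recall, which can be checked directly:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9550, 0.9010), 4))  # 0.9272
```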
## Model description
This is a fine-tuned swahBERT model. You can get the original model from [here](https://github.com/gatimartin/SwahBERT "swahBERT Model").
## Training and evaluation data
The model was fine tuned using [this dataset](https://huggingface.co/datasets/metabloit/offensive-swahili-text "Swahili offensive/non-offensive dataset")
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 310 | 0.6506 | 0.9282 | 0.9417 | 0.9131 | 0.9272 |
| 0.0189 | 2.0 | 620 | 0.4982 | 0.9292 | 0.9550 | 0.9010 | 0.9272 |
| 0.0189 | 3.0 | 930 | 0.5387 | 0.9323 | 0.9693 | 0.8929 | 0.9295 |
| 0.0314 | 4.0 | 1240 | 0.6365 | 0.9221 | 0.9524 | 0.8889 | 0.9195 |
| 0.0106 | 5.0 | 1550 | 0.6687 | 0.9282 | 0.9473 | 0.9071 | 0.9267 |
| 0.0106 | 6.0 | 1860 | 0.6671 | 0.9282 | 0.9454 | 0.9091 | 0.9269 |
| 0.0016 | 7.0 | 2170 | 0.6908 | 0.9242 | 0.9468 | 0.8990 | 0.9223 |
| 0.0016 | 8.0 | 2480 | 0.6832 | 0.9272 | 0.9471 | 0.9051 | 0.9256 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
## References
```bibtex
@inproceedings{martin-etal-2022-swahbert,
    title = "{S}wah{BERT}: Language Model of {S}wahili",
    author = "Martin, Gati and Mswahili, Medard Edmund and Jeong, Young-Seob and Woo, Jiyoung",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.23",
    pages = "303--313"
}
```
|
asmaa1/videomae-base-groub8-finetuned-SLT-subset | asmaa1 | 2023-09-28T06:57:40Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-09-28T06:26:10Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-groub8-finetuned-SLT-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-groub8-finetuned-SLT-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7797
- Accuracy: 0.2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.25 | 5 | 2.9454 | 0.15 |
| 3.1466 | 1.25 | 10 | 2.8727 | 0.15 |
| 3.1466 | 2.25 | 15 | 2.8190 | 0.2 |
| 2.8589 | 3.25 | 20 | 2.7797 | 0.2 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
fishytorts/whisper-large-peft-lora-intent-voice-checker-v2 | fishytorts | 2023-09-28T06:55:00Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T15:05:51Z | ---
library_name: peft
---
## LoraConfig arguments
```python
from peft import LoraConfig

config = LoraConfig(
    r=32,
    lora_alpha=64,
    # target_modules=".*decoder.*(self_attn|encoder_attn).*(q_proj|v_proj)$",
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
```
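With rank `r=32`, each adapted weight matrix gains two low-rank factors, A (r × d_in) and B (d_out × r). A quick parameter count for a single hypothetical 1280-dimensional projection (the dimension is an assumption for illustration):

```python
def lora_param_count(d_in, d_out, r):
    # A: r x d_in, B: d_out x r
    return r * d_in + d_out * r

print(lora_param_count(1280, 1280, 32))  # 81920
```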
## Training arguments
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="temp",  # change to a repo name of your choice
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,  # increase by 2x for every 2x decrease in batch size
    learning_rate=1e-3,
    warmup_steps=10,
    max_steps=400,  # 1500
    # evaluation_strategy="steps",
    fp16=True,
    per_device_eval_batch_size=8,
    # generation_max_length=128,
    eval_steps=100,
    logging_steps=25,
    remove_unused_columns=False,  # required as the PeftModel forward doesn't have the signature of the wrapped model's forward
    label_names=["label"],  # same reason as above
)
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
CyberHarem/saito_kaede_encouragementofclimb | CyberHarem | 2023-09-28T06:39:51Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/saito_kaede_encouragementofclimb",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T06:17:48Z | ---
license: mit
datasets:
- CyberHarem/saito_kaede_encouragementofclimb
pipeline_tag: text-to-image
tags:
- art
---
# Lora of saito_kaede_encouragementofclimb
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, they must be used together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA.
For example, if you want to use the model from step 8400, you need to download `8400/saito_kaede_encouragementofclimb.pt` as the embedding and `8400/saito_kaede_encouragementofclimb.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8400**, with a score of 0.972. The trigger words are:
1. `saito_kaede_encouragementofclimb`
2. `black_hair, glasses, blush, long_hair, hairclip, hair_ornament, blue_eyes, smile`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9000 | 0.940 | [Download](9000/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9000/previews/pattern_19.png) | [<NSFW, click to see>](9000/previews/bikini.png) | [<NSFW, click to see>](9000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9000/previews/nude.png) | [<NSFW, click to see>](9000/previews/nude2.png) |  |  |
| **8400** | **0.972** | [**Download**](8400/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/pattern_19.png) | [<NSFW, click to see>](8400/previews/bikini.png) | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| 7800 | 0.938 | [Download](7800/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_19.png) | [<NSFW, click to see>](7800/previews/bikini.png) | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7200 | 0.955 | [Download](7200/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/pattern_19.png) | [<NSFW, click to see>](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6600 | 0.937 | [Download](6600/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/pattern_19.png) | [<NSFW, click to see>](6600/previews/bikini.png) | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6000 | 0.939 | [Download](6000/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/pattern_19.png) | [<NSFW, click to see>](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5400 | 0.934 | [Download](5400/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/pattern_19.png) | [<NSFW, click to see>](5400/previews/bikini.png) | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4800 | 0.941 | [Download](4800/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/pattern_19.png) | [<NSFW, click to see>](4800/previews/bikini.png) | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4200 | 0.940 | [Download](4200/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/pattern_19.png) | [<NSFW, click to see>](4200/previews/bikini.png) | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3600 | 0.910 | [Download](3600/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/pattern_19.png) | [<NSFW, click to see>](3600/previews/bikini.png) | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3000 | 0.929 | [Download](3000/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/pattern_19.png) | [<NSFW, click to see>](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2400 | 0.935 | [Download](2400/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/pattern_19.png) | [<NSFW, click to see>](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1800 | 0.909 | [Download](1800/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1800/previews/pattern_19.png) | [<NSFW, click to see>](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1200 | 0.887 | [Download](1200/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/pattern_19.png) | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 600 | 0.681 | [Download](600/saito_kaede_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](600/previews/pattern_19.png) | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) |  |  |
|
dss107/news3 | dss107 | 2023-09-28T06:26:18Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
]
| text-classification | 2023-09-28T06:25:03Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dss107/news3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dss107/news3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
CyberHarem/tedeza_rize_istheorderarabbit | CyberHarem | 2023-09-28T06:21:50Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/tedeza_rize_istheorderarabbit",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T06:02:53Z | ---
license: mit
datasets:
- CyberHarem/tedeza_rize_istheorderarabbit
pipeline_tag: text-to-image
tags:
- art
---
# Lora of tedeza_rize_istheorderarabbit
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the `.pt` and `.safetensors` files for the chosen step, use them together: the `.pt` file serves as a textual-inversion embedding, while the `.safetensors` file is loaded as a LoRA.
For example, if you want to use the model from step 7800, you need to download `7800/tedeza_rize_istheorderarabbit.pt` as the embedding and `7800/tedeza_rize_istheorderarabbit.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
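As a minimal sketch of this file-pairing convention (the helper name is hypothetical, not part of the release), the two paths that must be used together for a given step can be derived like this:

```python
def lora_pair(step: int, name: str = "tedeza_rize_istheorderarabbit"):
    """Return the (embedding, lora) file paths for a given training step.

    The embedding (.pt) and the LoRA weights (.safetensors) share the
    same stem inside the step's folder and are meant to be loaded together.
    """
    return (f"{step}/{name}.pt", f"{step}/{name}.safetensors")

embedding, lora = lora_pair(7800)
print(embedding)  # 7800/tedeza_rize_istheorderarabbit.pt
print(lora)       # 7800/tedeza_rize_istheorderarabbit.safetensors
```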
**The best step we recommend is 7800**, with a score of 0.970. The trigger words are:
1. `tedeza_rize_istheorderarabbit`
2. `purple_hair, long_hair, twintails, purple_eyes, bangs, hair_ornament, blush, hairclip, hair_between_eyes, closed_mouth, indoors`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The following steps are available:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9000 | 0.964 | [Download](9000/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9000/previews/pattern_11.png) |  |  | [<NSFW, click to see>](9000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9000/previews/nude.png) | [<NSFW, click to see>](9000/previews/nude2.png) |  |  |
| 8400 | 0.966 | [Download](8400/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/pattern_11.png) |  |  | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| **7800** | **0.970** | [**Download**](7800/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_11.png) |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7200 | 0.965 | [Download](7200/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/pattern_11.png) |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6600 | 0.969 | [Download](6600/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/pattern_11.png) |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6000 | 0.921 | [Download](6000/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/pattern_11.png) |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5400 | 0.960 | [Download](5400/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/pattern_11.png) |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4800 | 0.963 | [Download](4800/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4200 | 0.959 | [Download](4200/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3600 | 0.903 | [Download](3600/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3000 | 0.932 | [Download](3000/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2400 | 0.927 | [Download](2400/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/pattern_11.png) |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1800 | 0.885 | [Download](1800/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1800/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1200 | 0.822 | [Download](1200/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 600 | 0.687 | [Download](600/tedeza_rize_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](600/previews/pattern_11.png) |  |  | [<NSFW, click to see>](600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) |  |  |
|
Yntec/Cetus | Yntec | 2023-09-28T06:17:00Z | 417 | 3 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"2D",
"2.5D",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"Eagelaxis",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-08-29T04:42:12Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
language:
- en
tags:
- Anime
- 2D
- 2.5D
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- Eagelaxis
inference: true
---
# Cetus
When you think about a Cetus generation, you think about the 3.5 version. This checkpoint is the fp16, no-EMA variant.
Samples and prompts:


Pretty cute girl. Like lesser birds on the four winds. Like silver scrapes in May. Now the sands become a crust. And most of you have gone away.
Original page:
https://civitai.com/models/6755?modelVersionId=29851
|
abvijaykumar/bloom-560m-prefix-tuned-qa | abvijaykumar | 2023-09-28T06:15:21Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-18T13:29:57Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Gayathri142214002/Pegasus_paraphraser_2 | Gayathri142214002 | 2023-09-28T06:03:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-25T05:19:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: Pegasus_paraphraser_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pegasus_paraphraser_2
This model is a fine-tuned version of [Gayathri142214002/Pegasus_paraphraser_1](https://huggingface.co/Gayathri142214002/Pegasus_paraphraser_1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2589 | 0.45 | 1000 | 0.2488 |
| 0.2693 | 0.9 | 2000 | 0.2436 |
| 0.2255 | 1.35 | 3000 | 0.2632 |
| 0.2291 | 1.8 | 4000 | 0.2603 |
| 0.2092 | 2.25 | 5000 | 0.2714 |
| 0.1955 | 2.69 | 6000 | 0.2668 |
| 0.1893 | 3.14 | 7000 | 0.2802 |
| 0.1706 | 3.59 | 8000 | 0.2781 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JeswinMS4/finetuned-llama-2 | JeswinMS4 | 2023-09-28T05:24:52Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T05:24:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
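The settings listed above correspond to a 4-bit NF4 setup with double quantization and fp16 compute. As a sketch (parameter names follow the `transformers` 4-bit quantization API; this is not necessarily the exact code used for training), the same configuration can be expressed as:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute,
# mirroring the config values recorded above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```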
### Framework versions
- PEFT 0.5.0
|
roa7n/gpt2-human_nontata_promoters-randomized_5_layers_0.003_lr_8_e | roa7n | 2023-09-28T05:19:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T05:19:23Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
CyberHarem/kafuu_chino_istheorderarabbit | CyberHarem | 2023-09-28T05:08:48Z | 0 | 1 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/kafuu_chino_istheorderarabbit",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T04:50:57Z | ---
license: mit
datasets:
- CyberHarem/kafuu_chino_istheorderarabbit
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kafuu_chino_istheorderarabbit
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the `.pt` and `.safetensors` files for the chosen step, use them together: the `.pt` file serves as a textual-inversion embedding, while the `.safetensors` file is loaded as a LoRA.
For example, if you want to use the model from step 8400, you need to download `8400/kafuu_chino_istheorderarabbit.pt` as the embedding and `8400/kafuu_chino_istheorderarabbit.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8400**, with a score of 0.968. The trigger words are:
1. `kafuu_chino_istheorderarabbit`
2. `blue_hair, long_hair, blue_eyes, x_hair_ornament, hair_ornament, blush, bangs, closed_mouth, hair_between_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe character models must be trained purely by manual operation to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The following steps are available:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9000 | 0.968 | [Download](9000/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9000/previews/nude.png) | [<NSFW, click to see>](9000/previews/nude2.png) |  |  |
| **8400** | **0.968** | [**Download**](8400/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| 7800 | 0.967 | [Download](7800/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7200 | 0.968 | [Download](7200/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| 6600 | 0.963 | [Download](6600/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6000 | 0.962 | [Download](6000/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5400 | 0.926 | [Download](5400/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5400/previews/nude.png) | [<NSFW, click to see>](5400/previews/nude2.png) |  |  |
| 4800 | 0.961 | [Download](4800/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4200 | 0.956 | [Download](4200/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3600 | 0.925 | [Download](3600/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3000 | 0.935 | [Download](3000/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2400 | 0.935 | [Download](2400/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1800 | 0.900 | [Download](1800/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) |  |  |
| 1200 | 0.848 | [Download](1200/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 600 | 0.563 | [Download](600/kafuu_chino_istheorderarabbit.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) |  |  |
|
kensvin/sdss-cnn | kensvin | 2023-09-28T05:01:55Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"cnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-28T05:01:40Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sdss-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sdss-cnn
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1573
- Accuracy: 0.9505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 0.4954 | 0.8635 |
| No log | 2.0 | 160 | 0.2788 | 0.9055 |
| No log | 3.0 | 240 | 0.2239 | 0.9085 |
| No log | 4.0 | 320 | 0.1991 | 0.9325 |
| No log | 5.0 | 400 | 0.1954 | 0.94 |
| No log | 6.0 | 480 | 0.1854 | 0.9445 |
| 0.3543 | 7.0 | 560 | 0.1891 | 0.9375 |
| 0.3543 | 8.0 | 640 | 0.1777 | 0.943 |
| 0.3543 | 9.0 | 720 | 0.1780 | 0.9415 |
| 0.3543 | 10.0 | 800 | 0.1804 | 0.942 |
| 0.3543 | 11.0 | 880 | 0.1734 | 0.9475 |
| 0.3543 | 12.0 | 960 | 0.1689 | 0.947 |
| 0.2022 | 13.0 | 1040 | 0.1698 | 0.9445 |
| 0.2022 | 14.0 | 1120 | 0.1689 | 0.9405 |
| 0.2022 | 15.0 | 1200 | 0.1650 | 0.9475 |
| 0.2022 | 16.0 | 1280 | 0.1755 | 0.934 |
| 0.2022 | 17.0 | 1360 | 0.1635 | 0.944 |
| 0.2022 | 18.0 | 1440 | 0.1711 | 0.942 |
| 0.1836 | 19.0 | 1520 | 0.1604 | 0.9485 |
| 0.1836 | 20.0 | 1600 | 0.1595 | 0.95 |
| 0.1836 | 21.0 | 1680 | 0.1613 | 0.9475 |
| 0.1836 | 22.0 | 1760 | 0.1579 | 0.949 |
| 0.1836 | 23.0 | 1840 | 0.1593 | 0.946 |
| 0.1836 | 24.0 | 1920 | 0.1579 | 0.945 |
| 0.167 | 25.0 | 2000 | 0.1584 | 0.9495 |
| 0.167 | 26.0 | 2080 | 0.1573 | 0.9505 |
| 0.167 | 27.0 | 2160 | 0.1596 | 0.945 |
| 0.167 | 28.0 | 2240 | 0.1599 | 0.9435 |
| 0.167 | 29.0 | 2320 | 0.1565 | 0.9485 |
| 0.167 | 30.0 | 2400 | 0.1582 | 0.946 |
| 0.167 | 31.0 | 2480 | 0.1563 | 0.95 |
| 0.1568 | 32.0 | 2560 | 0.1563 | 0.95 |
| 0.1568 | 33.0 | 2640 | 0.1573 | 0.9495 |
| 0.1568 | 34.0 | 2720 | 0.1564 | 0.9465 |
| 0.1568 | 35.0 | 2800 | 0.1557 | 0.95 |
| 0.1568 | 36.0 | 2880 | 0.1554 | 0.949 |
| 0.1568 | 37.0 | 2960 | 0.1562 | 0.948 |
| 0.1515 | 38.0 | 3040 | 0.1555 | 0.948 |
| 0.1515 | 39.0 | 3120 | 0.1557 | 0.95 |
| 0.1515 | 40.0 | 3200 | 0.1559 | 0.9485 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
oshita-n/textual_inversion_10 | oshita-n | 2023-09-28T04:58:03Z | 36 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-28T04:52:27Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - oshita-n/textual_inversion_10
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
imdatta0/internlm-huft | imdatta0 | 2023-09-28T04:57:35Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T04:57:32Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
zhengzhou/checkpoints | zhengzhou | 2023-09-28T04:29:56Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-26T18:41:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of zly woman
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - zhengzhou/checkpoints
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of zly woman" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
asmaa1/videomae-base-groub6-finetuned-SLT-subset | asmaa1 | 2023-09-28T04:13:09Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2023-09-28T03:16:33Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-groub6-finetuned-SLT-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-groub6-finetuned-SLT-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7184
- Accuracy: 0.1905
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 132
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.3751 | 0.16 | 21 | 2.9978 | 0.0952 |
| 3.3444 | 1.16 | 42 | 2.9361 | 0.1429 |
| 3.1148 | 2.16 | 63 | 2.8907 | 0.1429 |
| 3.1054 | 3.16 | 84 | 2.8089 | 0.1905 |
| 2.6316 | 4.16 | 105 | 2.7559 | 0.1905 |
| 2.9311 | 5.16 | 126 | 2.7195 | 0.1905 |
| 2.972 | 6.05 | 132 | 2.7184 | 0.1905 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
george24/hubbub-topics | george24 | 2023-09-28T04:09:36Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-09-27T23:04:22Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hubbub-topics
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubbub-topics
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5901
- Accuracy: 0.8152
- Precision: 0.8134
- Recall: 0.8152
- F1: 0.8079
## Model description
Hubbub Categories/Topics fine-tuned model
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.209 | 1.0 | 1406 | 1.0149 | 0.6644 | 0.6512 | 0.6644 | 0.6460 |
| 1.0161 | 2.0 | 2812 | 0.8027 | 0.7444 | 0.7414 | 0.7444 | 0.7327 |
| 0.7695 | 3.0 | 4218 | 0.5901 | 0.8152 | 0.8134 | 0.8152 | 0.8079 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
CyberHarem/yukimura_aoi_encouragementofclimb | CyberHarem | 2023-09-28T04:02:30Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yukimura_aoi_encouragementofclimb",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T03:43:45Z | ---
license: mit
datasets:
- CyberHarem/yukimura_aoi_encouragementofclimb
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yukimura_aoi_encouragementofclimb
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 9240, you need to download `9240/yukimura_aoi_encouragementofclimb.pt` as the embedding and `9240/yukimura_aoi_encouragementofclimb.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 9240**, with a score of 0.913. The trigger words are:
1. `yukimura_aoi_encouragementofclimb`
2. `blush, short_hair, green_eyes, hair_ornament, hairclip, grey_hair, brown_hair`
This model is not recommended for the following groups, to our regret:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the randomness inherent in AI images generated with the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9900 | 0.896 | [Download](9900/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9900/previews/nude.png) | [<NSFW, click to see>](9900/previews/nude2.png) |  |  |
| **9240** | **0.913** | [**Download**](9240/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9240/previews/nude.png) | [<NSFW, click to see>](9240/previews/nude2.png) |  |  |
| 8580 | 0.902 | [Download](8580/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8580/previews/nude.png) | [<NSFW, click to see>](8580/previews/nude2.png) |  |  |
| 7920 | 0.855 | [Download](7920/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7920/previews/nude.png) | [<NSFW, click to see>](7920/previews/nude2.png) |  |  |
| 7260 | 0.893 | [Download](7260/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7260/previews/nude.png) | [<NSFW, click to see>](7260/previews/nude2.png) |  |  |
| 6600 | 0.891 | [Download](6600/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 5940 | 0.882 | [Download](5940/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5280 | 0.870 | [Download](5280/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4620 | 0.867 | [Download](4620/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) |  |  |
| 3960 | 0.869 | [Download](3960/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3300 | 0.888 | [Download](3300/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3300/previews/nude.png) | [<NSFW, click to see>](3300/previews/nude2.png) |  |  |
| 2640 | 0.863 | [Download](2640/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 1980 | 0.796 | [Download](1980/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1980/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1980/previews/nude.png) | [<NSFW, click to see>](1980/previews/nude2.png) |  |  |
| 1320 | 0.791 | [Download](1320/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 660 | 0.722 | [Download](660/yukimura_aoi_encouragementofclimb.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](660/previews/nude.png) | [<NSFW, click to see>](660/previews/nude2.png) |  |  |
|
roa7n/gpt2-human_nontata_promoters-randomized_5_layers_3e-05_lr_2_e | roa7n | 2023-09-28T03:30:19Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T03:30:16Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-multiple-negatives-loss | checkiejan | 2023-09-28T03:27:11Z | 13 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-28T03:26:44Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 595 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
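As a rough illustration (not the training code itself), `MultipleNegativesRankingLoss` with these parameters scores each anchor against every in-batch candidate with scaled cosine similarity, then applies cross-entropy with the matching pair as the target. A minimal pure-Python sketch under those assumptions:

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """Mean cross-entropy over scaled in-batch similarity rows;
    the correct 'class' for anchor i is positive i, all other
    positives in the batch act as negatives."""
    losses = []
    for i, a in enumerate(anchors):
        logits = [scale * cos_sim(a, p) for p in positives]
        log_z = math.log(sum(math.exp(l) for l in logits))
        losses.append(log_z - logits[i])
    return sum(losses) / len(losses)

# Two well-separated pairs -> loss close to zero.
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[1.0, 0.1], [0.1, 1.0]]
print(multiple_negatives_ranking_loss(anchors, positives))
```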
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 59,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
xinli95/q-Taxi-v3 | xinli95 | 2023-09-28T03:24:25Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-28T03:24:23Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="xinli95/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
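For reference, the table behind this agent comes from the standard tabular Q-learning update rule; a minimal sketch with toy values (not the actual training loop):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q(s, a) toward the TD target
    reward + gamma * max_a' Q(s', a')."""
    td_target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# Toy 2-state, 2-action table.
q = [[0.0, 0.0], [0.0, 1.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0][1])  # 0.1 * (1.0 + 0.99 * 1.0) = 0.199
```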
|
omidvaramin/HBART | omidvaramin | 2023-09-28T03:22:36Z | 111 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-24T21:02:54Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for HBART
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of [bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the Newsroom dataset to generate news headlines. To ask the model to generate headlines,
"Headline: " should be prepended to the article.
## Intended uses & limitations
You can use this model for headline generation task on English news articles.
### Usage
```python
article = """Two of the OPEC oil cartels 11 members, Nigeria and Venezuela, said today \
that they would voluntarily cut production in response to declining crude oil prices, which \
have fallen 20 percent from their peak two months ago.
The move, which would take less than 200,000 barrels of oil a day off the market, follows days \
of mixed signals from some OPEC officials, who have voiced increasing concern about the rapid \
drop in prices. Nigerias oil minister, Edmund Daukoru, who is president of OPEC this year, \
recently said the price of oil was very low.
Nigeria and Venezuela, which have generally been price hawks within the group, said their decision \
to cut production grew out of an informal deal reached at OPECs last meeting, earlier this month, \
to pare output if prices fell steeply. Some OPEC representatives have grown anxious at the slide in \
the oil futures markets, where prices for benchmark contracts have fallen from a midsummer high of \
$77.03 a barrel.
But traders shrugged off the announcement of the production cuts today. On the New York Mercantile \
Exchange, the most widely watched contract price light, low-sulfur crude for delivery next month \
traded this afternoon at $62.30 a barrel, down 0.7 percent.
Mr. Daukoru has been in contact with other OPEC ministers to discuss prices, which on Monday briefly \
slipped below $60 a barrel for the first time in six months. But the Organization of the Petroleum \
Exporting Countries, as the cartel is formally known, denied any shift in policy.
We are not currently concerned, a delegate from one of OPECs Gulf members said. The prices are \
currently manageable and fair. We are not overly alarmed by the prices. It is not a cause for alarm. \
It's the market working.
It is not unusual for oil prices to fall after Labor Day and the conclusion of the summer travel season. \
Demand tends to slow in the third quarter, and refiners reduce their output for seasonal maintenance; \
consumption picks up again with the first winter cold in the Western Hemisphere, and prices sometimes do as well.
We are not going to push extra oil in the market or force it down our customers throats, we just respond to demand, \
the delegate from the Gulf said.
Still, contradictory statements from senior OPEC representatives have sown doubt about the oil cartel's strategy. \
Whether OPEC countries actually reduce their output or not, the mixed messages have at least succeeded in one way: \
oil traders have been persuaded that OPEC is willing to step in to defend prices, and have traded on that belief, \
slowing the recent price decline.
While apparently fanciful, reports of an imminent output cut reflect two hard facts: stocks are building faster than \
expected, and several producers have an incredibly low pain threshold when it comes to price drops, Antoine Halff, an \
energy analyst with Fimat, wrote in a note to clients today. However, more price declines will likely be needed before \
OPEC producers decide on any coordinated move.
Venezuela, which pumps about 2.5 million barrels a day, said it would cut its daily output by 50,000 barrels, or about 2 \
percent, starting Oct. 1. Nigeria said it would trim its exports by 5 percent on the same date, a reduction of about \
120,000 barrels a day from its current output of about 3.8 million barrels a day.
They are trying to influence the psychology of the market, said Larry Goldstein, a veteran oil analyst and the president \
of the Petroleum Industry Research Foundation in New York. Although they are reacting to the reduction in demand, they \
are trying to convince the market that they are actually anticipating it, by making cuts ahead of the market. But they \
are simply reacting to it, which is how markets should operate."""
import transformers
import os
import torch
#If you have more than one GPU, you can specify here which one to use
os.environ["CUDA_VISIBLE_DEVICES"]="5"
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
#appending the task identifier to the beginning of input
article = "Headline: " + article
model = AutoModelForSeq2SeqLM.from_pretrained("omidvaramin/HBART").to(device)
tokenizer = AutoTokenizer.from_pretrained("omidvaramin/HBART")
# encode the article with the tokenizer
encoding = tokenizer(article
, max_length=1024
, truncation=True
,return_tensors="pt"
,padding='longest')
input_ids = encoding['input_ids']
attention_masks = encoding['attention_mask']
# transfer the data to the GPU
input_ids = input_ids.to(device)
attention_masks = attention_masks.to(device)
# generate headlines with beam search
beam_outputs = model.generate(
input_ids = input_ids,
attention_mask = attention_masks
,do_sample = False
,num_beams = 4
,max_length = 20
,min_length = 1
,num_return_sequences = 1
)
result = tokenizer.batch_decode(beam_outputs,
skip_special_tokens=True)
print(result[0])
>>> 2 OPEC Nations Agree to Cut Oil Output
```
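The `num_beams=4` argument above makes `generate` use beam search. As a toy illustration of the idea (independent per-step scores for simplicity, not the real decoder):

```python
import math

def beam_search(step_log_probs, num_beams=2):
    """Toy beam search: step_log_probs[t][token] is the log-probability of
    each token at step t. Keeps the num_beams highest-scoring prefixes at
    every step and returns the best full sequence."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for log_probs in step_log_probs:
        candidates = [
            (seq + [tok], score + lp)
            for seq, score in beams
            for tok, lp in enumerate(log_probs)
        ]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]
    return beams[0][0]

steps = [[math.log(0.6), math.log(0.4)],
         [math.log(0.3), math.log(0.7)]]
print(beam_search(steps))  # [0, 1]
```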
### BibTeX entry and citation info
```bibtex
@ARTICLE{10154027,
author={Omidvar, Amin and An, Aijun},
journal={IEEE Access},
title={Learning to Generate Popular Headlines},
year={2023},
volume={11},
number={},
pages={60904-60914},
doi={10.1109/ACCESS.2023.3286853}}
|
oshita-n/textual_inversion_7 | oshita-n | 2023-09-28T03:22:00Z | 35 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-28T03:16:33Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - oshita-n/textual_inversion_7
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
xinli95/q-FrozenLake-v1-4x4-noSlippery | xinli95 | 2023-09-28T03:14:26Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-28T03:14:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="xinli95/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tylerkiser/ppo-Huggy | tylerkiser | 2023-09-28T03:11:51Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-28T03:06:07Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: tylerkiser/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pablorfb/chatbot | pablorfb | 2023-09-28T02:59:30Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T02:59:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
oshita-n/textual_inversion_5 | oshita-n | 2023-09-28T02:53:14Z | 38 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-28T02:46:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - oshita-n/textual_inversion_5
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
micposso/spotted-lanterfly-reco | micposso | 2023-09-28T02:42:44Z | 0 | 0 | null | [
"biology",
"en",
"license:mit",
"region:us"
]
| null | 2023-09-21T18:53:16Z | ---
license: mit
language:
- en
tags:
- biology
---
## Spotted Lanternfly Image Detector
# This model can be used to identify spotted lanternflies at different growth stages.
# You can try the model here https://teachablemachine.withgoogle.com/models/KdKkohSG2/ |
VuongQuoc/checkpoints_27_9_microsoft_deberta_21_9 | VuongQuoc | 2023-09-28T02:39:59Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"base_model:VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9",
"base_model:finetune:VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9",
"license:mit",
"endpoints_compatible",
"region:us"
]
| multiple-choice | 2023-09-26T16:11:41Z | ---
license: mit
base_model: VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: checkpoints_27_9_microsoft_deberta_21_9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints_27_9_microsoft_deberta_21_9
This model is a fine-tuned version of [VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9](https://huggingface.co/VuongQuoc/checkpoints_26_9_microsoft_deberta_21_9) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6632
- Map@3: 0.8608
- Accuracy: 0.775
- MAX_INPUT = 256
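`Map@3` above is mean average precision at 3 over the ranked answer choices: each sample contributes 1/rank for the first correct candidate within the top 3, else 0. As an illustration (hypothetical helper, not part of the training code):

```python
def map_at_3(predictions, labels):
    """MAP@3: predictions[i] is a ranked list of candidate answers for
    sample i; credit is 1/rank of the first correct one within the top 3."""
    total = 0.0
    for ranked, truth in zip(predictions, labels):
        score = 0.0
        for rank, candidate in enumerate(ranked[:3], start=1):
            if candidate == truth:
                score = 1.0 / rank
                break
        total += score
    return total / len(predictions)

# First sample correct at rank 1, second at rank 2, third missed.
print(map_at_3([["A", "B", "C"], ["D", "A", "B"], ["C", "D", "E"]],
               ["A", "A", "B"]))  # 0.5 = (1 + 0.5 + 0) / 3
```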
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map@3 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.6308 | 0.05 | 100 | 0.6775 | 0.8842 | 0.815 |
| 0.3472 | 0.11 | 200 | 0.7255 | 0.8767 | 0.805 |
| 0.2267 | 0.16 | 300 | 0.7786 | 0.8608 | 0.785 |
| 0.143 | 0.21 | 400 | 0.8580 | 0.8333 | 0.735 |
| 0.0723 | 0.27 | 500 | 0.9517 | 0.8358 | 0.735 |
| 0.3952 | 0.32 | 600 | 0.6632 | 0.8608 | 0.775 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.13.3
|
oshita-n/textual_inversion_4 | oshita-n | 2023-09-28T02:33:04Z | 35 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-28T02:27:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - oshita-n/textual_inversion_4
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
|
roa7n/gpt2-human_nontata_promoters-randomized_5_layers_0.003_lr_2_e | roa7n | 2023-09-28T02:25:09Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T02:25:07Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-triplet-100 | checkiejan | 2023-09-28T02:24:39Z | 13 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2023-09-28T02:24:14Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-triplet-100
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-triplet-100')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-triplet-100')
model = AutoModel.from_pretrained('checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-triplet-100')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
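With embeddings in hand, retrieval reduces to scoring. The base model of this checkpoint (multi-qa-mpnet-base-dot-v1) is tuned for dot-product similarity, so ranking documents against a query is just a dot score followed by a sort. A minimal, self-contained sketch with made-up 4-dimensional vectors standing in for the real 768-dimensional embeddings:

```python
import numpy as np

# Toy embeddings standing in for model.encode() output (the real model
# returns 768-dim vectors; these 4-dim values are hypothetical).
query_emb = np.array([1.0, 0.0, 1.0, 0.0])
doc_embs = np.array([
    [1.0, 0.0, 0.9, 0.1],   # close to the query
    [0.0, 1.0, 0.0, 1.0],   # unrelated
    [0.5, 0.5, 0.5, 0.5],   # partially related
])

# Rank documents by dot score against the query (highest score first).
scores = doc_embs @ query_emb
ranking = np.argsort(-scores)
print(ranking.tolist())
```

The same pattern scales to a corpus: encode all documents once, then score each incoming query against the stored matrix.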
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-triplet-100)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 298 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
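Concretely, the configured loss (Euclidean distance, margin 5) penalizes any triplet in which the negative is not at least 5 units farther from the anchor than the positive. A minimal numeric sketch of that formula (the 2-dimensional vectors are made up for illustration):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=5.0):
    # TripletDistanceMetric.EUCLIDEAN with triplet_margin=5, as in the
    # training config above: loss = max(d(a, p) - d(a, n) + margin, 0).
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])   # distance 1 from the anchor
n = np.array([10.0, 0.0])  # distance 10 from the anchor
print(triplet_loss(a, p, n))  # -4 before clamping, so the loss is 0.0
```

When the negative is already far enough away, the loss is zero and that triplet contributes no gradient.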
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 50,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 29,
"weight_decay": 0.01
}
```
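The `WarmupLinear` scheduler above ramps the learning rate from 0 up to the configured 2e-05 over the first 29 warmup steps, then decays it linearly toward 0 by the end of training. A sketch of the multiplier it applies at each step, assuming the standard linear-warmup/linear-decay formula:

```python
def warmup_linear(step, warmup_steps=29, total_steps=298):
    # Factor multiplied onto the base lr at each optimizer step:
    # rises linearly to 1.0 during warmup, then decays linearly to 0.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(warmup_linear(0), warmup_linear(29), warmup_linear(298))
```

With 298 batches and one epoch, the peak learning rate is reached at step 29 and the schedule hits zero exactly at the final step.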
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CyberHarem/fujimiya_konomi_nonnonbiyori | CyberHarem | 2023-09-28T02:02:40Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/fujimiya_konomi_nonnonbiyori",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T01:49:02Z | ---
license: mit
datasets:
- CyberHarem/fujimiya_konomi_nonnonbiyori
pipeline_tag: text-to-image
tags:
- art
---
# Lora of fujimiya_konomi_nonnonbiyori
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 4420, you need to download `4420/fujimiya_konomi_nonnonbiyori.pt` as the embedding and `4420/fujimiya_konomi_nonnonbiyori.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.918. The trigger words are:
1. `fujimiya_konomi_nonnonbiyori`
2. `brown_hair, hair_ornament, hairclip, long_hair, purple_eyes, braid, smile, blush`
This model is not recommended for the following groups, with our regrets:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operations out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.877 | [Download](5100/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.915 | [Download](4760/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.918** | [**Download**](4420/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.916 | [Download](4080/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.915 | [Download](3740/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.872 | [Download](3400/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.892 | [Download](3060/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.869 | [Download](2720/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.875 | [Download](2380/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.857 | [Download](2040/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.843 | [Download](1700/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.845 | [Download](1360/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.803 | [Download](1020/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.676 | [Download](680/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.571 | [Download](340/fujimiya_konomi_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
mchen-hf-2023/Pixelcopter-PLE-v0 | mchen-hf-2023 | 2023-09-28T02:00:04Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-28T01:59:33Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.20 +/- 23.36
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
roa7n/gpt2-human_nontata_promoters-randomized_4_layers_3e-05_lr_8_e | roa7n | 2023-09-28T01:52:35Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-28T01:52:31Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
CyberHarem/miyauchi_hikage_nonnonbiyori | CyberHarem | 2023-09-28T01:18:37Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/miyauchi_hikage_nonnonbiyori",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T01:05:11Z | ---
license: mit
datasets:
- CyberHarem/miyauchi_hikage_nonnonbiyori
pipeline_tag: text-to-image
tags:
- art
---
# Lora of miyauchi_hikage_nonnonbiyori
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 6000, you need to download `6000/miyauchi_hikage_nonnonbiyori.pt` as the embedding and `6000/miyauchi_hikage_nonnonbiyori.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6000**, with a score of 0.892. The trigger words are:
1. `miyauchi_hikage_nonnonbiyori`
2. `blue_eyes, purple_hair, blush, one_side_up, black_hair, short_hair`
This model is not recommended for the following groups, with our regrets:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operations out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **6000** | **0.892** | [**Download**](6000/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5600 | 0.881 | [Download](5600/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 5200 | 0.891 | [Download](5200/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4800 | 0.874 | [Download](4800/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4400 | 0.873 | [Download](4400/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 4000 | 0.878 | [Download](4000/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3600 | 0.867 | [Download](3600/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3200 | 0.859 | [Download](3200/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2800 | 0.787 | [Download](2800/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2400 | 0.796 | [Download](2400/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 2000 | 0.829 | [Download](2000/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1600 | 0.802 | [Download](1600/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) |  |  |
| 1200 | 0.698 | [Download](1200/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 800 | 0.737 | [Download](800/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) |  |  |
| 400 | 0.600 | [Download](400/miyauchi_hikage_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) |  |  |
|
CyberHarem/miyauchi_kazuho_nonnonbiyori | CyberHarem | 2023-09-28T00:30:09Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/miyauchi_kazuho_nonnonbiyori",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-28T00:16:50Z | ---
license: mit
datasets:
- CyberHarem/miyauchi_kazuho_nonnonbiyori
pipeline_tag: text-to-image
tags:
- art
---
# Lora of miyauchi_kazuho_nonnonbiyori
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA.
For example, if you want to use the model from step 4420, you need to download `4420/miyauchi_kazuho_nonnonbiyori.pt` as the embedding and `4420/miyauchi_kazuho_nonnonbiyori.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.991. The trigger words are:
1. `miyauchi_kazuho_nonnonbiyori`
2. `closed_eyes, purple_hair, long_hair, smile, ponytail, blue_hair`
This model is not recommended for the following groups, with our regrets:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by manual operations out of respect for the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.926 | [Download](5100/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.947 | [Download](4760/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.991** | [**Download**](4420/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.883 | [Download](4080/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.989 | [Download](3740/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.880 | [Download](3400/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.882 | [Download](3060/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.824 | [Download](2720/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.872 | [Download](2380/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.931 | [Download](2040/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.925 | [Download](1700/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.861 | [Download](1360/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.872 | [Download](1020/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.806 | [Download](680/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.602 | [Download](340/miyauchi_kazuho_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
line-corporation/japanese-large-lm-3.6b-instruction-sft-8bit-1g-actorder_True | line-corporation | 2023-09-28T00:02:06Z | 84 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-26T06:16:23Z | ---
license: apache-2.0
inference: false
language: ja
---
# japanese-large-lm-3.6b-instruction-sft-8bit-1g-actorder_True
This repository provides a **quantized** 3.6B-parameter Japanese language model, fine-tuned and trained by [LINE Corporation](https://linecorp.com/ja/).
## For Japanese
For detailed explanations and experiments, please see the Japanese-language article "[【インターンレポート】量子化による大規模言語モデル軽量化の効果測定](https://engineering.linecorp.com/ja/blog/quantization-lightweighting-llms)" ([Intern Report] Measuring the effect of quantization on lightening large language models).
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft-8bit-1g-actorder_True")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
input_text = """四国の県名を全て列挙してください。"""
text = generator(
f"ユーザー: {input_text}\nシステム: ",
max_length = 256,
do_sample = True,
temperature = 0.7,
top_p = 0.9,
top_k = 0,
repetition_penalty = 1.1,
num_beams = 1,
pad_token_id = tokenizer.pad_token_id,
num_return_sequences = 1,
)
print(text) # [{'generated_text': 'ユーザー: 四国の県名を全て列挙してください。\nシステム: 高知県、徳島県、香川県、愛媛県'}]
```
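Among the sampling arguments above, `top_p=0.9` applies nucleus filtering: at each generation step only the smallest set of highest-probability tokens whose cumulative mass reaches 0.9 is kept, and sampling happens over that renormalized set. A library-free sketch of the filtering step (the probability values are made up for illustration):

```python
def top_p_filter(probs, top_p=0.9):
    # Keep the smallest set of highest-probability tokens whose
    # cumulative mass reaches top_p, then renormalize over that set.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

print(top_p_filter([0.5, 0.3, 0.15, 0.05]))
```

With `top_k=0` the top-k filter is disabled, so nucleus filtering is the only truncation applied before sampling.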
## Tokenization
We use a sentencepiece tokenizer with a unigram language model and byte-fallback.
We **do not** apply pre-tokenization with a Japanese tokenizer, so raw sentences can be fed directly into the tokenizer.
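Byte-fallback is what makes feeding raw sentences safe: any piece missing from the unigram vocabulary is decomposed into its UTF-8 bytes, each mapped to a byte token, so no input ever maps to an unknown token. A toy illustration of the idea (the `<0xXX>` token format follows sentencepiece's convention; the real segmentation differs):

```python
def byte_fallback(piece):
    # Decompose an out-of-vocabulary piece into sentencepiece-style
    # byte tokens, one per UTF-8 byte of the input string.
    return [f"<0x{b:02X}>" for b in piece.encode("utf-8")]

print(byte_fallback("県"))  # three byte tokens for one 3-byte UTF-8 character
```

In a real tokenizer the fallback only fires for pieces the unigram model cannot cover, but it guarantees every input string is representable.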
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
line-corporation/japanese-large-lm-3.6b-instruction-sft-4bit-32g-actorder_False | line-corporation | 2023-09-27T23:56:05Z | 79 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-26T06:15:51Z | ---
license: apache-2.0
inference: false
language: ja
---
# japanese-large-lm-3.6b-instruction-sft-4bit-32g-actorder_False
This repository provides a **quantized** 3.6B-parameter Japanese language model, fine-tuned and trained by [LINE Corporation](https://linecorp.com/ja/).
## For Japanese
For detailed explanations and experiments, please see the Japanese-language article "[【インターンレポート】量子化による大規模言語モデル軽量化の効果測定](https://engineering.linecorp.com/ja/blog/quantization-lightweighting-llms)" ([Intern Report] Measuring the effect of quantization on lightening large language models).
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft-4bit-32g-actorder_False")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
input_text = """四国の県名を全て列挙してください。"""
text = generator(
f"ユーザー: {input_text}\nシステム: ",
max_length = 256,
do_sample = True,
temperature = 0.7,
top_p = 0.9,
top_k = 0,
repetition_penalty = 1.1,
num_beams = 1,
pad_token_id = tokenizer.pad_token_id,
num_return_sequences = 1,
)
print(text) # [{'generated_text': 'ユーザー: 四国の県名を全て列挙してください。\nシステム: 高知県、徳島県、香川県、愛媛県'}]
```
## Tokenization
We use a sentencepiece tokenizer with a unigram language model and byte-fallback.
We **do not** apply pre-tokenization with a Japanese tokenizer, so raw sentences can be fed directly into the tokenizer.
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
line-corporation/japanese-large-lm-3.6b-instruction-sft-4bit-128g-actorder_False | line-corporation | 2023-09-27T23:54:44Z | 81 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-26T06:16:04Z | ---
license: apache-2.0
inference: false
language: ja
---
# japanese-large-lm-3.6b-instruction-sft-4bit-128g-actorder_False
This repository provides a 3.6-billion-parameter **quantized** Japanese language model, fine-tuned and trained by [LINE Corporation](https://linecorp.com/ja/).
## For Japanese
詳細な説明や実験に関しては「[【インターンレポート】量子化による大規模言語モデル軽量化の効果測定](https://engineering.linecorp.com/ja/blog/quantization-lightweighting-llms)」をご覧ください。
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("line-corporation/japanese-large-lm-3.6b-instruction-sft-4bit-128g-actorder_False")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
input_text = """四国の県名を全て列挙してください。"""
text = generator(
f"ユーザー: {input_text}\nシステム: ",
max_length = 256,
do_sample = True,
temperature = 0.7,
top_p = 0.9,
top_k = 0,
repetition_penalty = 1.1,
num_beams = 1,
pad_token_id = tokenizer.pad_token_id,
num_return_sequences = 1,
)
print(text) # [{'generated_text': 'ユーザー: 四国の県名を全て列挙してください。\nシステム: 高知県、徳島県、香川県、愛媛県'}]
```
## Tokenization
We use a sentencepiece tokenizer with a unigram language model and byte-fallback.
We **do not** apply pre-tokenization with a Japanese tokenizer.
Thus, a user may directly feed raw sentences into the tokenizer.
## License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
GraydientPlatformAPI/ether | GraydientPlatformAPI | 2023-09-27T23:49:38Z | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-27T23:38:26Z | ---
library_name: diffusers
pipeline_tag: text-to-image
--- |
badokorach/flan-t5-small-qa-9 | badokorach | 2023-09-27T23:49:06Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:badokorach/flan-t5-small-qa",
"base_model:finetune:badokorach/flan-t5-small-qa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-27T22:13:23Z | ---
license: apache-2.0
base_model: badokorach/flan-t5-small-qa
tags:
- generated_from_trainer
model-index:
- name: flan-t5-small-qa-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-qa-9
This model is a fine-tuned version of [badokorach/flan-t5-small-qa](https://huggingface.co/badokorach/flan-t5-small-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
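With the `linear` scheduler and no warmup steps configured, the learning rate decays from its initial value to zero over the course of training; a minimal sketch of that decay (illustrative only, not the actual Transformers scheduler):

```python
def linear_lr(step, total_steps, base_lr=3e-05):
    # Linearly decay from base_lr at step 0 down to 0 at total_steps.
    return base_lr * max(0.0, 1.0 - step / total_steps)

# 30 epochs x 305 steps/epoch = 9150 total steps, matching the results table.
print(linear_lr(0, 9150))     # start of training: 3e-05
print(linear_lr(9150, 9150))  # end of training: 0.0
```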
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 305 | 0.0747 |
| 0.061 | 2.0 | 610 | 0.0791 |
| 0.061 | 3.0 | 915 | 0.0798 |
| 0.052 | 4.0 | 1220 | 0.0845 |
| 0.0481 | 5.0 | 1525 | 0.0807 |
| 0.0481 | 6.0 | 1830 | 0.0837 |
| 0.0443 | 7.0 | 2135 | 0.0888 |
| 0.0443 | 8.0 | 2440 | 0.0890 |
| 0.0413 | 9.0 | 2745 | 0.0869 |
| 0.0381 | 10.0 | 3050 | 0.0905 |
| 0.0381 | 11.0 | 3355 | 0.0903 |
| 0.0356 | 12.0 | 3660 | 0.0900 |
| 0.0356 | 13.0 | 3965 | 0.0915 |
| 0.0341 | 14.0 | 4270 | 0.0937 |
| 0.0325 | 15.0 | 4575 | 0.0949 |
| 0.0325 | 16.0 | 4880 | 0.0943 |
| 0.0306 | 17.0 | 5185 | 0.0953 |
| 0.0306 | 18.0 | 5490 | 0.0948 |
| 0.0301 | 19.0 | 5795 | 0.0966 |
| 0.0288 | 20.0 | 6100 | 0.0969 |
| 0.0288 | 21.0 | 6405 | 0.0976 |
| 0.0279 | 22.0 | 6710 | 0.0987 |
| 0.0275 | 23.0 | 7015 | 0.0984 |
| 0.0275 | 24.0 | 7320 | 0.0975 |
| 0.027 | 25.0 | 7625 | 0.0979 |
| 0.027 | 26.0 | 7930 | 0.0984 |
| 0.0261 | 27.0 | 8235 | 0.0991 |
| 0.026 | 28.0 | 8540 | 0.0992 |
| 0.026 | 29.0 | 8845 | 0.0990 |
| 0.0259 | 30.0 | 9150 | 0.0989 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
CyberHarem/kagayama_kaede_nonnonbiyori | CyberHarem | 2023-09-27T23:47:19Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/kagayama_kaede_nonnonbiyori",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-27T23:33:47Z | ---
license: mit
datasets:
- CyberHarem/kagayama_kaede_nonnonbiyori
pipeline_tag: text-to-image
tags:
- art
---
# Lora of kagayama_kaede_nonnonbiyori
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2660, you need to download `2660/kagayama_kaede_nonnonbiyori.pt` as the embedding and `2660/kagayama_kaede_nonnonbiyori.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2660**, with the score of 0.905. The trigger words are:
1. `kagayama_kaede_nonnonbiyori`
2. `blonde_hair, long_hair, brown_eyes, ahoge`
This model is not recommended, with our regrets, for the following groups:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5700 | 0.903 | [Download](5700/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5700/previews/nude.png) | [<NSFW, click to see>](5700/previews/nude2.png) |  |  |
| 5320 | 0.812 | [Download](5320/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5320/previews/nude.png) | [<NSFW, click to see>](5320/previews/nude2.png) |  |  |
| 4940 | 0.820 | [Download](4940/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4940/previews/nude.png) | [<NSFW, click to see>](4940/previews/nude2.png) |  |  |
| 4560 | 0.886 | [Download](4560/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4560/previews/nude.png) | [<NSFW, click to see>](4560/previews/nude2.png) |  |  |
| 4180 | 0.897 | [Download](4180/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4180/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4180/previews/nude.png) | [<NSFW, click to see>](4180/previews/nude2.png) |  |  |
| 3800 | 0.809 | [Download](3800/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3800/previews/nude.png) | [<NSFW, click to see>](3800/previews/nude2.png) |  |  |
| 3420 | 0.897 | [Download](3420/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3420/previews/nude.png) | [<NSFW, click to see>](3420/previews/nude2.png) |  |  |
| 3040 | 0.834 | [Download](3040/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3040/previews/nude.png) | [<NSFW, click to see>](3040/previews/nude2.png) |  |  |
| **2660** | **0.905** | [**Download**](2660/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2660/previews/nude.png) | [<NSFW, click to see>](2660/previews/nude2.png) |  |  |
| 2280 | 0.852 | [Download](2280/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2280/previews/nude.png) | [<NSFW, click to see>](2280/previews/nude2.png) |  |  |
| 1900 | 0.860 | [Download](1900/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1900/previews/nude.png) | [<NSFW, click to see>](1900/previews/nude2.png) |  |  |
| 1520 | 0.846 | [Download](1520/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1520/previews/nude.png) | [<NSFW, click to see>](1520/previews/nude2.png) |  |  |
| 1140 | 0.877 | [Download](1140/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1140/previews/nude.png) | [<NSFW, click to see>](1140/previews/nude2.png) |  |  |
| 760 | 0.680 | [Download](760/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](760/previews/nude.png) | [<NSFW, click to see>](760/previews/nude2.png) |  |  |
| 380 | 0.553 | [Download](380/kagayama_kaede_nonnonbiyori.zip) |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](380/previews/nude.png) | [<NSFW, click to see>](380/previews/nude2.png) |  |  |
|
HazemHM/Reinforce-CartPole-V1 | HazemHM | 2023-09-27T23:43:13Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T23:43:02Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-V1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 499.90 +/- 0.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
HazemHM/Reinforce-CartPoleV1 | HazemHM | 2023-09-27T23:24:05Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T23:23:53Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPoleV1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 482.50 +/- 52.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vagmi/squeal | vagmi | 2023-09-27T22:52:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:b-mc2/sql-create-context",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T15:57:37Z | ---
license: apache-2.0
datasets:
- b-mc2/sql-create-context
language:
- en
library_name: transformers
---
# Generate SQL from text - Squeal
Please use the code below as an example of how to use this model.
```python
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
def load_model(model_name):
# Load tokenizer and model with QLoRA configuration
compute_dtype = getattr(torch, 'float16')
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=False,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map={"": 0},
quantization_config=bnb_config
)
# Load Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
return model, tokenizer
model, tokenizer = load_model('vagmi/squeal')
prompt = "<s>[INST] Output SQL for the given table structure \n \
CREATE TABLE votes (contestant_number VARCHAR, num_votes int); \
CREATE TABLE contestants (contestant_number VARCHAR, contestant_name VARCHAR); \
What is the contestant number and name of the contestant who got least votes?[/INST]"
pipe = pipeline(task="text-generation",
model=model,
tokenizer=tokenizer,
max_length=200,
device_map='auto', )
result = pipe(prompt)
print(result[0]['generated_text'][len(prompt):-1])
```
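The `[INST]` prompt layout used above (an instruction, the schema `CREATE TABLE` statements, then the natural-language question) can be assembled with a small helper; a sketch under our own naming (`build_prompt` is not part of this repository):

```python
def build_prompt(schemas, question):
    # Mirror the prompt layout used in the example above:
    # instruction, schema statements, then the natural-language question.
    schema_text = " ".join(schemas)
    return (
        "<s>[INST] Output SQL for the given table structure \n "
        f"{schema_text} {question}[/INST]"
    )

prompt = build_prompt(
    ["CREATE TABLE votes (contestant_number VARCHAR, num_votes int);"],
    "Which contestant got the most votes?",
)
print(prompt)
```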
## How I built it?
Watch me build this model.
https://www.youtube.com/watch?v=PNFhAfxR_d8
Here is the notebook I used to train this model.
https://colab.research.google.com/drive/1jYX8AlRMTY7F_dH3hCFM4ljg5qEmCoUe#scrollTo=IUILKaGWhBxS
|
ayymen/crnn_mobilenet_v3_large_tifinagh | ayymen | 2023-09-27T22:46:25Z | 56 | 4 | transformers | [
"transformers",
"pytorch",
"OCR",
"zgh",
"ber",
"taq",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-27T22:10:35Z | ---
language:
- zgh
- ber
- taq
tags:
- OCR
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
    "arch": "crnn_mobilenet_v3_large",
    "train_path": "train",
    "val_path": "val",
    "train_samples": 1000,
    "val_samples": 20,
    "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
    "min_chars": 1,
    "max_chars": 12,
    "name": "crnn_mobilenet_v3_large_tifinagh",
    "epochs": 1,
    "batch_size": 64,
    "device": null,
    "input_size": 32,
    "lr": 0.001,
    "weight_decay": 0,
    "workers": 2,
    "resume": "crnn_mobilenet_v3_large_tifinagh.pt",
    "vocab": "tamazight",
    "test_only": false,
    "show_samples": false,
    "wb": true,
    "push_to_hub": true,
    "pretrained": false,
    "sched": "cosine",
    "amp": false,
    "find_lr": false
}
```
|
akashmaggon/llam2fullmodel | akashmaggon | 2023-09-27T22:43:48Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T22:43:01Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Globaly/globaly-1-llama2-7b-NSHF-v0.3 | Globaly | 2023-09-27T22:16:13Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T22:15:40Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
fmagot01/whisper-small-dv-second | fmagot01 | 2023-09-27T22:15:40Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-09-27T14:17:13Z | ---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
metrics:
- name: Wer
type: wer
value: 0.13502799318426817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Wer Ortho: 0.6258
- Wer: 0.1350
- Cer: 0.0963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.1995 | 0.81 | 250 | 0.2387 | 0.7319 | 0.1888 | 0.1330 |
| 0.1215 | 1.63 | 500 | 0.1689 | 0.6258 | 0.1350 | 0.0963 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Mintrz/Loobe-3 | Mintrz | 2023-09-27T22:11:36Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T22:04:08Z | ---
license: other
license_name: d
license_link: LICENSE
---
|
LarryAIDraw/beidou_genshin | LarryAIDraw | 2023-09-27T22:09:41Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-27T22:06:11Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/130976/beidou-genshin-impact |
LarryAIDraw/Shinonono_Tabane | LarryAIDraw | 2023-09-27T22:09:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-27T22:05:45Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/152683/shinonono-tabaneinfinite-stratos |
LarryAIDraw/ichinose_shiki_v1 | LarryAIDraw | 2023-09-27T22:09:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-27T22:05:15Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/152587/ichinose-shiki-the-idolmster-cinderella-girls |
LarryAIDraw/Feise-08 | LarryAIDraw | 2023-09-27T22:09:04Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-27T22:04:42Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/152854/fei-se-tower-of-fantasy |
LarryAIDraw/Tohru-10 | LarryAIDraw | 2023-09-27T22:08:55Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-27T22:04:20Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/152827/tohru-miss-kobayashis-dragon-maid-lora |
LarryAIDraw/yor-08 | LarryAIDraw | 2023-09-27T22:08:12Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-27T22:02:40Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/152721/yor-or-spy-family |
ahof1704/brainlm | ahof1704 | 2023-09-27T21:40:48Z | 0 | 2 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-09-27T21:13:41Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# BrainLM model
<!-- Provide a quick summary of what the model is/does. -->
This is the pretrained Brain Language Model (BrainLM), which aims to achieve a general understanding of brain dynamics through self-supervised masked prediction. It is introduced in [this paper](https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1) and its code is available in [this repository](https://github.com/vandijklab/BrainLM).
## Model Details
### Model Description
We introduce the Brain Language Model (BrainLM), a foundation model for brain activity dynamics trained on 6,700 hours of fMRI recordings. Utilizing self-supervised masked-prediction training, BrainLM demonstrates proficiency in both fine-tuning and zero-shot inference tasks. Fine-tuning allows for the prediction of clinical variables and future brain states. In zero-shot inference, the model identifies functional networks and generates interpretable latent representations of neural activity. Furthermore, we introduce a novel prompting technique, allowing BrainLM to function as an in silico simulator of brain activity responses to perturbations. BrainLM offers a novel framework for the analysis and understanding of large-scale brain activity data, serving as a “lens” through which new data can be more effectively interpreted.
- **Developed by:** [van Dijk Lab](https://www.vandijklab.org/) at Yale University
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/vandijklab/BrainLM
- **Paper:** https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@article{ortega2023brainlm,
title={BrainLM: A foundation model for brain activity recordings},
author={Ortega Caro, Josue and Oliveira Fonseca, Antonio Henrique and Averill, Christopher and Rizvi, Syed A and Rosati, Matteo and Cross, James L and Mittal, Prateek and Zappala, Emanuele and Levine, Daniel and Dhodapkar, Rahul M and others},
journal={bioRxiv},
pages={2023--09},
year={2023},
publisher={Cold Spring Harbor Laboratory}
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GuilhermeGPGil/First_DRL_Model | GuilhermeGPGil | 2023-09-27T21:35:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T21:35:17Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.72 +/- 52.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kevinzeng/ppo-LunarLander-v2 | kevinzeng | 2023-09-27T21:32:01Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T21:31:40Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.94 +/- 16.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SaffalPoosh/system_design_expert | SaffalPoosh | 2023-09-27T21:20:06Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T14:13:08Z | ---
language:
- en
pipeline_tag: text-generation
---
This is Llama 2 7B fine-tuned with QLoRA, using bf16 as the compute dtype. The dataset was generated with the OpenAI API, with sample semantics oriented toward abstract explanations of system design.
The LoRA adapter has been merged into the original model; training ran for 3 epochs with a batch size of 16.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_path = "SaffalPoosh/system_design_expert"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

prompt = "Design an application like Whatsapp with tech stack you will use"
gen = pipeline('text-generation', model=model, tokenizer=tokenizer)
result = gen(prompt)
print(result[0]['generated_text'])
``` |
noahgift/hf_fine_tune_hello_world | noahgift | 2023-09-27T21:10:19Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-10-24T15:58:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
base_model: bert-base-cased
model-index:
- name: hf_fine_tune_hello_world
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- type: accuracy
value: 0.562
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_fine_tune_hello_world
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0594
- Accuracy: 0.562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
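These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch for orientation only; the actual training script is not included in this repo, and the `output_dir` is an assumption):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; Adam betas/epsilon match the defaults.
args = TrainingArguments(
    output_dir="hf_fine_tune_hello_world",  # assumed name, not from the repo
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```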
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.2177 | 0.467 |
| No log | 2.0 | 250 | 1.0214 | 0.569 |
| No log | 3.0 | 375 | 1.0594 | 0.562 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
oshita-n/textual_inversion_2 | oshita-n | 2023-09-27T20:43:23Z | 38 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-27T19:55:34Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - oshita-n/textual_inversion_2
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
JEdappully/Taxi | JEdappully | 2023-09-27T20:41:57Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T20:41:52Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="JEdappully/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
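Once the pickle is loaded, acting from a Q-table is just an argmax per state. A minimal, self-contained sketch (the tiny Q-table here is made up for illustration):

```python
import numpy as np

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state (pure exploitation)."""
    return int(np.argmax(qtable[state]))

# Tiny illustrative Q-table: 2 states x 3 actions
qtable = np.array([[0.1, 0.5, 0.2],
                   [0.0, -0.3, 0.7]])
print(greedy_action(qtable, 0))  # -> 1
print(greedy_action(qtable, 1))  # -> 2
```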
|
DouglasPontes/2020-Q2-filtered_tweets_tok_prog | DouglasPontes | 2023-09-27T20:38:18Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-23T14:08:01Z | ---
tags:
- generated_from_trainer
model-index:
- name: 2020-Q2-filtered_tweets_tok_prog
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q2-filtered_tweets_tok_prog
This model is a fine-tuned version of [DouglasPontes/2020-Q1-full_tweets_tok](https://huggingface.co/DouglasPontes/2020-Q1-full_tweets_tok) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1400
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.03 | 8000 | 7.0102 |
| 7.2267 | 0.07 | 16000 | 6.9542 |
| 7.2267 | 0.1 | 24000 | 6.9403 |
| 6.9704 | 0.14 | 32000 | 6.8324 |
| 6.9704 | 0.17 | 40000 | 6.7820 |
| 6.8499 | 0.21 | 48000 | 6.7417 |
| 6.8499 | 0.24 | 56000 | 6.6973 |
| 6.729 | 0.28 | 64000 | 6.6547 |
| 6.729 | 0.31 | 72000 | 6.6154 |
| 6.6453 | 0.35 | 80000 | 6.5520 |
| 6.6453 | 0.38 | 88000 | 6.5154 |
| 6.555 | 0.42 | 96000 | 6.4724 |
| 6.555 | 0.45 | 104000 | 6.4253 |
| 6.4797 | 0.49 | 112000 | 6.4060 |
| 6.4797 | 0.52 | 120000 | 6.3705 |
| 6.4146 | 0.56 | 128000 | 6.3289 |
| 6.4146 | 0.59 | 136000 | 6.3175 |
| 6.3623 | 0.63 | 144000 | 6.2903 |
| 6.3623 | 0.66 | 152000 | 6.2669 |
| 6.3233 | 0.7 | 160000 | 6.2329 |
| 6.3233 | 0.73 | 168000 | 6.2148 |
| 6.2846 | 0.77 | 176000 | 6.2140 |
| 6.2846 | 0.8 | 184000 | 6.1774 |
| 6.2548 | 0.84 | 192000 | 6.1518 |
| 6.2548 | 0.87 | 200000 | 6.1421 |
| 6.2163 | 0.91 | 208000 | 6.1221 |
| 6.2163 | 0.94 | 216000 | 6.1063 |
| 6.1854 | 0.98 | 224000 | 6.0982 |
| 6.1854 | 1.01 | 232000 | 6.0752 |
| 6.157 | 1.05 | 240000 | 6.0771 |
| 6.157 | 1.08 | 248000 | 6.0410 |
| 6.1311 | 1.12 | 256000 | 6.0283 |
| 6.1311 | 1.15 | 264000 | 6.0268 |
| 6.1132 | 1.19 | 272000 | 6.0237 |
| 6.1132 | 1.22 | 280000 | 6.0218 |
| 6.0771 | 1.26 | 288000 | 5.9890 |
| 6.0771 | 1.29 | 296000 | 5.9574 |
| 6.0559 | 1.33 | 304000 | 5.9871 |
| 6.0559 | 1.36 | 312000 | 5.9688 |
| 6.0159 | 1.4 | 320000 | 5.9408 |
| 6.0159 | 1.43 | 328000 | 5.9212 |
| 6.0085 | 1.47 | 336000 | 5.9064 |
| 6.0085 | 1.5 | 344000 | 5.9124 |
| 5.9947 | 1.54 | 352000 | 5.9012 |
| 5.9947 | 1.57 | 360000 | 5.8873 |
| 5.9726 | 1.61 | 368000 | 5.8993 |
| 5.9726 | 1.64 | 376000 | 5.8968 |
| 5.9668 | 1.68 | 384000 | 5.8790 |
| 5.9668 | 1.71 | 392000 | 5.8659 |
| 5.958 | 1.75 | 400000 | 5.8856 |
| 5.958 | 1.78 | 408000 | 5.8503 |
| 5.9476 | 1.82 | 416000 | 5.8859 |
| 5.9476 | 1.85 | 424000 | 5.8909 |
| 5.9195 | 1.89 | 432000 | 5.8603 |
| 5.9195 | 1.92 | 440000 | 5.8370 |
| 5.9143 | 1.96 | 448000 | 5.8232 |
| 5.9143 | 1.99 | 456000 | 5.8213 |
| 5.8991 | 2.03 | 464000 | 5.8196 |
| 5.8991 | 2.06 | 472000 | 5.8079 |
| 5.8735 | 2.09 | 480000 | 5.7811 |
| 5.8735 | 2.13 | 488000 | 5.7851 |
| 5.855 | 2.16 | 496000 | 5.7738 |
| 5.855 | 2.2 | 504000 | 5.7488 |
| 5.8666 | 2.23 | 512000 | 5.7699 |
| 5.8666 | 2.27 | 520000 | 5.7531 |
| 5.8256 | 2.3 | 528000 | 5.7357 |
| 5.8256 | 2.34 | 536000 | 5.7426 |
| 5.8222 | 2.37 | 544000 | 5.7376 |
| 5.8222 | 2.41 | 552000 | 5.7224 |
| 5.8097 | 2.44 | 560000 | 5.7088 |
| 5.8097 | 2.48 | 568000 | 5.7054 |
| 5.8077 | 2.51 | 576000 | 5.6899 |
| 5.8077 | 2.55 | 584000 | 5.6957 |
| 5.7859 | 2.58 | 592000 | 5.6851 |
| 5.7859 | 2.62 | 600000 | 5.7154 |
| 5.7823 | 2.65 | 608000 | 5.7051 |
| 5.7823 | 2.69 | 616000 | 5.6641 |
| 5.7714 | 2.72 | 624000 | 5.6700 |
| 5.7714 | 2.76 | 632000 | 5.6546 |
| 5.7686 | 2.79 | 640000 | 5.6435 |
| 5.7686 | 2.83 | 648000 | 5.6450 |
| 5.7483 | 2.86 | 656000 | 5.6132 |
| 5.7483 | 2.9 | 664000 | 5.6289 |
| 5.7308 | 2.93 | 672000 | 5.6310 |
| 5.7308 | 2.97 | 680000 | 5.6176 |
| 5.7201 | 3.0 | 688000 | 5.6278 |
| 5.7201 | 3.04 | 696000 | 5.6315 |
| 5.7202 | 3.07 | 704000 | 5.6112 |
| 5.7202 | 3.11 | 712000 | 5.6397 |
| 5.6954 | 3.14 | 720000 | 5.5901 |
| 5.6954 | 3.18 | 728000 | 5.5947 |
| 5.6794 | 3.21 | 736000 | 5.6044 |
| 5.6794 | 3.25 | 744000 | 5.5823 |
| 5.676 | 3.28 | 752000 | 5.5610 |
| 5.676 | 3.32 | 760000 | 5.5880 |
| 5.6746 | 3.35 | 768000 | 5.5645 |
| 5.6746 | 3.39 | 776000 | 5.5577 |
| 5.6617 | 3.42 | 784000 | 5.5687 |
| 5.6617 | 3.46 | 792000 | 5.5711 |
| 5.6519 | 3.49 | 800000 | 5.5424 |
| 5.6519 | 3.53 | 808000 | 5.5436 |
| 5.6453 | 3.56 | 816000 | 5.5545 |
| 5.6453 | 3.6 | 824000 | 5.5590 |
| 5.634 | 3.63 | 832000 | 5.5475 |
| 5.634 | 3.67 | 840000 | 5.5399 |
| 5.6364 | 3.7 | 848000 | 5.5167 |
| 5.6364 | 3.74 | 856000 | 5.5586 |
| 5.642 | 3.77 | 864000 | 5.5230 |
| 5.642 | 3.81 | 872000 | 5.5323 |
| 5.6453 | 3.84 | 880000 | 5.5151 |
| 5.6453 | 3.88 | 888000 | 5.5105 |
| 5.6174 | 3.91 | 896000 | 5.5233 |
| 5.6174 | 3.95 | 904000 | 5.5111 |
| 5.6076 | 3.98 | 912000 | 5.5201 |
| 5.6076 | 4.02 | 920000 | 5.5210 |
| 5.6179 | 4.05 | 928000 | 5.5249 |
| 5.6179 | 4.08 | 936000 | 5.4997 |
| 5.6125 | 4.12 | 944000 | 5.4942 |
| 5.6125 | 4.15 | 952000 | 5.5023 |
| 5.5992 | 4.19 | 960000 | 5.5026 |
| 5.5992 | 4.22 | 968000 | 5.5102 |
| 5.6088 | 4.26 | 976000 | 5.4885 |
| 5.6088 | 4.29 | 984000 | 5.4878 |
| 5.5988 | 4.33 | 992000 | 5.4941 |
| 5.5988 | 4.36 | 1000000 | 5.4859 |
| 5.5807 | 4.4 | 1008000 | 5.5001 |
| 5.5807 | 4.43 | 1016000 | 5.4815 |
| 5.5729 | 4.47 | 1024000 | 5.4762 |
| 5.5729 | 4.5 | 1032000 | 5.4702 |
| 5.5735 | 4.54 | 1040000 | 5.4680 |
| 5.5735 | 4.57 | 1048000 | 5.4746 |
| 5.5697 | 4.61 | 1056000 | 5.4505 |
| 5.5697 | 4.64 | 1064000 | 5.4598 |
| 5.5519 | 4.68 | 1072000 | 5.4463 |
| 5.5519 | 4.71 | 1080000 | 5.4462 |
| 5.5609 | 4.75 | 1088000 | 5.4327 |
| 5.5609 | 4.78 | 1096000 | 5.4424 |
| 5.5297 | 4.82 | 1104000 | 5.4504 |
| 5.5297 | 4.85 | 1112000 | 5.4250 |
| 5.5337 | 4.89 | 1120000 | 5.4178 |
| 5.5337 | 4.92 | 1128000 | 5.4223 |
| 5.5188 | 4.96 | 1136000 | 5.4344 |
| 5.5188 | 4.99 | 1144000 | 5.4237 |
| 5.5252 | 5.03 | 1152000 | 5.4352 |
| 5.5252 | 5.06 | 1160000 | 5.4122 |
| 5.5079 | 5.1 | 1168000 | 5.3956 |
| 5.5079 | 5.13 | 1176000 | 5.4041 |
| 5.5087 | 5.17 | 1184000 | 5.4014 |
| 5.5087 | 5.2 | 1192000 | 5.4066 |
| 5.4815 | 5.24 | 1200000 | 5.4048 |
| 5.4815 | 5.27 | 1208000 | 5.4176 |
| 5.5038 | 5.31 | 1216000 | 5.3841 |
| 5.5038 | 5.34 | 1224000 | 5.4197 |
| 5.5111 | 5.38 | 1232000 | 5.4098 |
| 5.5111 | 5.41 | 1240000 | 5.3933 |
| 5.4898 | 5.45 | 1248000 | 5.3870 |
| 5.4898 | 5.48 | 1256000 | 5.3909 |
| 5.4883 | 5.52 | 1264000 | 5.3741 |
| 5.4883 | 5.55 | 1272000 | 5.3825 |
| 5.489 | 5.59 | 1280000 | 5.3820 |
| 5.489 | 5.62 | 1288000 | 5.3900 |
| 5.4895 | 5.66 | 1296000 | 5.3884 |
| 5.4895 | 5.69 | 1304000 | 5.3957 |
| 5.4738 | 5.73 | 1312000 | 5.3762 |
| 5.4738 | 5.76 | 1320000 | 5.3720 |
| 5.4736 | 5.8 | 1328000 | 5.3955 |
| 5.4736 | 5.83 | 1336000 | 5.3632 |
| 5.4768 | 5.87 | 1344000 | 5.3807 |
| 5.4768 | 5.9 | 1352000 | 5.3680 |
| 5.4676 | 5.94 | 1360000 | 5.3807 |
| 5.4676 | 5.97 | 1368000 | 5.3685 |
| 5.4728 | 6.01 | 1376000 | 5.3745 |
| 5.4728 | 6.04 | 1384000 | 5.3591 |
| 5.4594 | 6.08 | 1392000 | 5.3641 |
| 5.4594 | 6.11 | 1400000 | 5.3577 |
| 5.4551 | 6.14 | 1408000 | 5.3704 |
| 5.4551 | 6.18 | 1416000 | 5.3587 |
| 5.4434 | 6.21 | 1424000 | 5.3646 |
| 5.4434 | 6.25 | 1432000 | 5.3644 |
| 5.4479 | 6.28 | 1440000 | 5.3500 |
| 5.4479 | 6.32 | 1448000 | 5.3695 |
| 5.447 | 6.35 | 1456000 | 5.3418 |
| 5.447 | 6.39 | 1464000 | 5.3468 |
| 5.4295 | 6.42 | 1472000 | 5.3460 |
| 5.4295 | 6.46 | 1480000 | 5.3491 |
| 5.4461 | 6.49 | 1488000 | 5.3509 |
| 5.4461 | 6.53 | 1496000 | 5.3335 |
| 5.4491 | 6.56 | 1504000 | 5.3422 |
| 5.4491 | 6.6 | 1512000 | 5.3506 |
| 5.4518 | 6.63 | 1520000 | 5.3481 |
| 5.4518 | 6.67 | 1528000 | 5.3398 |
| 5.442 | 6.7 | 1536000 | 5.3202 |
| 5.442 | 6.74 | 1544000 | 5.3221 |
| 5.4266 | 6.77 | 1552000 | 5.3344 |
| 5.4266 | 6.81 | 1560000 | 5.3331 |
| 5.4185 | 6.84 | 1568000 | 5.3406 |
| 5.4185 | 6.88 | 1576000 | 5.3246 |
| 5.4162 | 6.91 | 1584000 | 5.3317 |
| 5.4162 | 6.95 | 1592000 | 5.3198 |
| 5.425 | 6.98 | 1600000 | 5.3128 |
| 5.425 | 7.02 | 1608000 | 5.3174 |
| 5.4018 | 7.05 | 1616000 | 5.3192 |
| 5.4018 | 7.09 | 1624000 | 5.3178 |
| 5.4084 | 7.12 | 1632000 | 5.3163 |
| 5.4084 | 7.16 | 1640000 | 5.3155 |
| 5.4211 | 7.19 | 1648000 | 5.3180 |
| 5.4211 | 7.23 | 1656000 | 5.3208 |
| 5.4087 | 7.26 | 1664000 | 5.3175 |
| 5.4087 | 7.3 | 1672000 | 5.3004 |
| 5.3983 | 7.33 | 1680000 | 5.3081 |
| 5.3983 | 7.37 | 1688000 | 5.3048 |
| 5.4004 | 7.4 | 1696000 | 5.3077 |
| 5.4004 | 7.44 | 1704000 | 5.2859 |
| 5.3888 | 7.47 | 1712000 | 5.3083 |
| 5.3888 | 7.51 | 1720000 | 5.3010 |
| 5.3834 | 7.54 | 1728000 | 5.2991 |
| 5.3834 | 7.58 | 1736000 | 5.2878 |
| 5.379 | 7.61 | 1744000 | 5.2785 |
| 5.379 | 7.65 | 1752000 | 5.2871 |
| 5.3872 | 7.68 | 1760000 | 5.3042 |
| 5.3872 | 7.72 | 1768000 | 5.2847 |
| 5.3891 | 7.75 | 1776000 | 5.3002 |
| 5.3891 | 7.79 | 1784000 | 5.2793 |
| 5.3915 | 7.82 | 1792000 | 5.2721 |
| 5.3915 | 7.86 | 1800000 | 5.2710 |
| 5.3786 | 7.89 | 1808000 | 5.2894 |
| 5.3786 | 7.93 | 1816000 | 5.2897 |
| 5.3802 | 7.96 | 1824000 | 5.2838 |
| 5.3802 | 8.0 | 1832000 | 5.2762 |
| 5.3681 | 8.03 | 1840000 | 5.2869 |
| 5.3681 | 8.07 | 1848000 | 5.2630 |
| 5.3658 | 8.1 | 1856000 | 5.2833 |
| 5.3658 | 8.13 | 1864000 | 5.2774 |
| 5.3674 | 8.17 | 1872000 | 5.2680 |
| 5.3674 | 8.2 | 1880000 | 5.2601 |
| 5.3626 | 8.24 | 1888000 | 5.2669 |
| 5.3626 | 8.27 | 1896000 | 5.2480 |
| 5.3588 | 8.31 | 1904000 | 5.2580 |
| 5.3588 | 8.34 | 1912000 | 5.2707 |
| 5.3503 | 8.38 | 1920000 | 5.2699 |
| 5.3503 | 8.41 | 1928000 | 5.2660 |
| 5.3505 | 8.45 | 1936000 | 5.2469 |
| 5.3505 | 8.48 | 1944000 | 5.2541 |
| 5.3543 | 8.52 | 1952000 | 5.2568 |
| 5.3543 | 8.55 | 1960000 | 5.2691 |
| 5.3503 | 8.59 | 1968000 | 5.2508 |
| 5.3503 | 8.62 | 1976000 | 5.2467 |
| 5.348 | 8.66 | 1984000 | 5.2731 |
| 5.348 | 8.69 | 1992000 | 5.2624 |
| 5.3519 | 8.73 | 2000000 | 5.2682 |
| 5.3519 | 8.76 | 2008000 | 5.2457 |
| 5.3303 | 8.8 | 2016000 | 5.2627 |
| 5.3303 | 8.83 | 2024000 | 5.2619 |
| 5.3418 | 8.87 | 2032000 | 5.2428 |
| 5.3418 | 8.9 | 2040000 | 5.2523 |
| 5.3525 | 8.94 | 2048000 | 5.2514 |
| 5.3525 | 8.97 | 2056000 | 5.2533 |
| 5.3332 | 9.01 | 2064000 | 5.2367 |
| 5.3332 | 9.04 | 2072000 | 5.2391 |
| 5.3352 | 9.08 | 2080000 | 5.2304 |
| 5.3352 | 9.11 | 2088000 | 5.2329 |
| 5.3434 | 9.15 | 2096000 | 5.2337 |
| 5.3434 | 9.18 | 2104000 | 5.2364 |
| 5.3205 | 9.22 | 2112000 | 5.2368 |
| 5.3205 | 9.25 | 2120000 | 5.2304 |
| 5.3216 | 9.29 | 2128000 | 5.2256 |
| 5.3216 | 9.32 | 2136000 | 5.2172 |
| 5.3247 | 9.36 | 2144000 | 5.2261 |
| 5.3247 | 9.39 | 2152000 | 5.2383 |
| 5.3249 | 9.43 | 2160000 | 5.2242 |
| 5.3249 | 9.46 | 2168000 | 5.2455 |
| 5.3054 | 9.5 | 2176000 | 5.2404 |
| 5.3054 | 9.53 | 2184000 | 5.2329 |
| 5.3182 | 9.57 | 2192000 | 5.2129 |
| 5.3182 | 9.6 | 2200000 | 5.2111 |
| 5.3119 | 9.64 | 2208000 | 5.2214 |
| 5.3119 | 9.67 | 2216000 | 5.2236 |
| 5.302 | 9.71 | 2224000 | 5.2206 |
| 5.302 | 9.74 | 2232000 | 5.2170 |
| 5.3074 | 9.78 | 2240000 | 5.2258 |
| 5.3074 | 9.81 | 2248000 | 5.2059 |
| 5.3098 | 9.85 | 2256000 | 5.2100 |
| 5.3098 | 9.88 | 2264000 | 5.2124 |
| 5.294 | 9.92 | 2272000 | 5.2088 |
| 5.294 | 9.95 | 2280000 | 5.2018 |
| 5.3123 | 9.99 | 2288000 | 5.2135 |
| 5.3123 | 10.02 | 2296000 | 5.2197 |
| 5.3061 | 10.06 | 2304000 | 5.2147 |
| 5.3061 | 10.09 | 2312000 | 5.2134 |
| 5.2906 | 10.13 | 2320000 | 5.2046 |
| 5.2906 | 10.16 | 2328000 | 5.2007 |
| 5.2974 | 10.19 | 2336000 | 5.2045 |
| 5.2974 | 10.23 | 2344000 | 5.2041 |
| 5.2964 | 10.26 | 2352000 | 5.1983 |
| 5.2964 | 10.3 | 2360000 | 5.2027 |
| 5.3104 | 10.33 | 2368000 | 5.1968 |
| 5.3104 | 10.37 | 2376000 | 5.2040 |
| 5.2933 | 10.4 | 2384000 | 5.2189 |
| 5.2933 | 10.44 | 2392000 | 5.2054 |
| 5.307 | 10.47 | 2400000 | 5.2138 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JEdappully/q-FrozenLake-v1-4x4-noSlippery | JEdappully | 2023-09-27T20:32:28Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T20:32:24Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.67 +/- 0.47
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="JEdappully/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kupru/a2c-PandaPickAndPlace-v3 | kupru | 2023-09-27T20:31:52Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T20:25:49Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="kupru/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
roa7n/gpt2-human_nontata_promoters-randomized_4_layers_3e-05_lr_2_e | roa7n | 2023-09-27T20:25:15Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T20:25:12Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
grakshit/squad_a_r_1160_bal | grakshit | 2023-09-27T20:24:14Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-27T20:21:39Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: squad_a_r_1160_bal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad_a_r_1160_bal
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6484
- Accuracy: 0.6782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 35 | 0.6942 | 0.4368 |
| No log | 2.0 | 70 | 0.6090 | 0.6724 |
| No log | 3.0 | 105 | 0.6323 | 0.6897 |
| No log | 4.0 | 140 | 0.6484 | 0.6782 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
JOSEDURANisc/practicaNLP | JOSEDURANisc | 2023-09-27T20:22:13Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-27T20:10:49Z | ---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: practicaNLP
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8357843137254902
- name: F1
type: f1
value: 0.8846815834767642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# practicaNLP
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7396
- Accuracy: 0.8358
- F1: 0.8847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4012 | 1.09 | 500 | 0.5515 | 0.8235 | 0.8763 |
| 0.3039 | 2.18 | 1000 | 0.7396 | 0.8358 | 0.8847 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
eugene6/ppo-Pyramids | eugene6 | 2023-09-27T20:10:57Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-27T20:10:53Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eugene6/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LoneStriker/Mistral-7B-Instruct-v0.1-8.0bpw-exl2 | LoneStriker | 2023-09-27T19:55:06Z | 5 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"finetuned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T19:18:08Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---
# ExLLaMA v2 quantization of Mistral-7B-Instruct-v0.1
Use [text-generation-webui](https://github.com/oobabooga/text-generation-webui) or [exllamav2](https://github.com/turboderp/exllamav2)
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, fine-tuned using a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Adjacent string literals are concatenated into a single multi-turn prompt.
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)

encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
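The instruction format described above can be assembled with a small helper. This is an illustrative sketch only, not part of the official tokenizer API:

```python
def build_mistral_prompt(turns):
    """Assemble a multi-turn prompt in the [INST] format described above.

    `turns` is a list of (user, assistant) pairs; pass None as the assistant
    reply for the final, unanswered user message. Illustrative helper only.
    """
    parts = ["<s>"]
    for user, assistant in turns:
        parts.append(f"[INST] {user} [/INST]")
        if assistant is not None:
            parts.append(f"{assistant}</s> ")
    return "".join(parts)

prompt = build_mistral_prompt([
    ("What is your favourite condiment?", "Lemon juice."),
    ("Do you have mayonnaise recipes?", None),
])
print(prompt)
```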
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
roa7n/gpt2-human_nontata_promoters-randomized_4_layers_0.0003_lr_2_e | roa7n | 2023-09-27T19:52:40Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T19:52:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
LoneStriker/Mistral-7B-Instruct-v0.1-6.0bpw-exl2 | LoneStriker | 2023-09-27T19:49:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"finetuned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T19:17:57Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---
# ExLLaMA v2 quantization of Mistral-7B-Instruct-v0.1
Use [text-generation-webui](https://github.com/oobabooga/text-generation-webui) or [exllamav2](https://github.com/turboderp/exllamav2)
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, fine-tuned using a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Adjacent string literals are concatenated into a single multi-turn prompt.
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)

encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
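The sliding-window attention pattern can be illustrated with a small mask builder. This sketches the masking rule only, not Mistral's actual implementation:

```python
def sliding_window_causal_mask(seq_len, window):
    """True where attention is allowed: position i attends to position j when
    j <= i (causal) and i - j < window (sliding window)."""
    return [[(j <= i) and (i - j < window) for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_causal_mask(5, 3)
for row in mask:
    print(["x" if allowed else "." for allowed in row])
```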
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
LoneStriker/Mistral-7B-Instruct-v0.1-5.0bpw-exl2 | LoneStriker | 2023-09-27T19:46:35Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"finetuned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T19:17:46Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---
# ExLLaMA v2 quantization of Mistral-7B-Instruct-v0.1
Use [text-generation-webui](https://github.com/oobabooga/text-generation-webui) or [exllamav2](https://github.com/turboderp/exllamav2)
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, fine-tuned using a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; the following instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)
encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
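The bracketing rules above can be sketched as a small helper (a hypothetical convenience function for illustration, not part of `transformers`):

```python
def build_prompt(turns):
    """Build a Mistral-Instruct prompt from (user, assistant) turn pairs.

    The final pair may use assistant=None to leave the model's turn open.
    """
    prompt = "<s>"  # begin-of-sentence only before the very first instruction
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f"{assistant}</s> "  # end-of-sentence closes each assistant turn
    return prompt

print(build_prompt([("Hi there!", "Hello!"), ("Tell me a joke.", None)]))
```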
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
LoneStriker/Mistral-7B-Instruct-v0.1-3.0bpw-exl2 | LoneStriker | 2023-09-27T19:46:01Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"finetuned",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-27T19:17:13Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
---
# ExLLaMA v2 quantization of Mistral-7B-Instruct-v0.1
Use [text-generation-webui](https://github.com/oobabooga/text-generation-webui) or [exllamav2](https://github.com/turboderp/exllamav2)
# Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.
For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
text = (
    "<s>[INST] What is your favourite condiment? [/INST]"
    "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
    "[INST] Do you have mayonnaise recipes? [/INST]"
)
encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
texasdave2/flan-t5-base-samsum | texasdave2 | 2023-09-27T19:43:10Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-23T04:26:27Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.1046
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3859
- Rouge1: 47.1046
- Rouge2: 23.264
- Rougel: 39.2757
- Rougelsum: 43.2598
- Gen Len: 17.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5121 | 0.08 | 50 | 1.4287 | 46.7868 | 22.863 | 38.971 | 42.8209 | 16.9634 |
| 1.46 | 0.16 | 100 | 1.4199 | 46.8031 | 22.8195 | 39.0708 | 42.8717 | 17.2393 |
| 1.4515 | 0.24 | 150 | 1.4147 | 46.6849 | 23.0376 | 38.9434 | 42.8344 | 17.1245 |
| 1.4679 | 0.33 | 200 | 1.4121 | 46.8756 | 22.8504 | 39.1671 | 43.1892 | 17.3431 |
| 1.451 | 0.41 | 250 | 1.4109 | 46.8572 | 23.09 | 39.2939 | 43.2955 | 17.2686 |
| 1.4434 | 0.49 | 300 | 1.4040 | 46.6829 | 23.071 | 39.3131 | 43.1432 | 16.9158 |
| 1.4417 | 0.57 | 350 | 1.4007 | 46.8637 | 23.0661 | 39.2462 | 43.1897 | 17.1172 |
| 1.4781 | 0.65 | 400 | 1.3952 | 46.8511 | 23.1134 | 39.3071 | 43.2164 | 17.2076 |
| 1.4626 | 0.73 | 450 | 1.3940 | 47.1533 | 23.2771 | 39.3094 | 43.2806 | 17.2222 |
| 1.4307 | 0.81 | 500 | 1.3955 | 46.9527 | 23.2227 | 39.2844 | 43.1903 | 17.2002 |
| 1.4586 | 0.9 | 550 | 1.3933 | 46.7523 | 23.1759 | 39.2675 | 43.1588 | 17.3040 |
| 1.4465 | 0.98 | 600 | 1.3905 | 46.855 | 23.3518 | 39.2879 | 43.2145 | 17.3468 |
| 1.381 | 1.06 | 650 | 1.3953 | 46.9719 | 22.9788 | 39.0886 | 43.1892 | 17.4066 |
| 1.4125 | 1.14 | 700 | 1.3922 | 46.535 | 23.0956 | 38.9275 | 42.9811 | 17.2381 |
| 1.3667 | 1.22 | 750 | 1.3922 | 47.3311 | 23.4123 | 39.5412 | 43.5624 | 17.2930 |
| 1.3878 | 1.3 | 800 | 1.3953 | 46.6737 | 23.2153 | 39.2982 | 43.2596 | 17.3358 |
| 1.3884 | 1.38 | 850 | 1.3931 | 46.9764 | 23.1561 | 39.1606 | 43.2115 | 17.3614 |
| 1.3766 | 1.47 | 900 | 1.3898 | 47.0466 | 23.1674 | 39.2822 | 43.293 | 17.3333 |
| 1.3727 | 1.55 | 950 | 1.3889 | 46.7311 | 23.0837 | 39.0882 | 43.0072 | 17.3211 |
| 1.4001 | 1.63 | 1000 | 1.3859 | 47.1046 | 23.264 | 39.2757 | 43.2598 | 17.3333 |
| 1.3894 | 1.71 | 1050 | 1.3874 | 47.2479 | 23.3762 | 39.4723 | 43.5241 | 17.3297 |
| 1.3697 | 1.79 | 1100 | 1.3860 | 47.1037 | 23.3894 | 39.3848 | 43.3875 | 17.3504 |
| 1.3886 | 1.87 | 1150 | 1.3862 | 47.0714 | 23.3937 | 39.4181 | 43.3841 | 17.3260 |
| 1.4037 | 1.95 | 1200 | 1.3861 | 47.0725 | 23.4085 | 39.3575 | 43.3676 | 17.3321 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.0+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mami99/my_first_model | mami99 | 2023-09-27T19:43:02Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-27T19:00:15Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_first_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5806451612903226
- name: Recall
type: recall
value: 0.3002780352177943
- name: F1
type: f1
value: 0.39584605986560784
- name: Accuracy
type: accuracy
value: 0.9416869736223333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_first_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2670
- Precision: 0.5806
- Recall: 0.3003
- F1: 0.3958
- Accuracy: 0.9417
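The reported F1 is simply the harmonic mean of the precision and recall above, which is quick to verify:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Full-precision values from the model-index metadata above
print(round(f1_score(0.5806451612903226, 0.3002780352177943), 4))  # ≈ 0.3958
```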
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2772 | 0.6181 | 0.2595 | 0.3655 | 0.9395 |
| No log | 2.0 | 426 | 0.2670 | 0.5806 | 0.3003 | 0.3958 | 0.9417 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.2
|
Pavanb/fincausal_robertalarge_lora_spanish | Pavanb | 2023-09-27T19:40:16Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T17:07:08Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
tasnim7ahmed/Huggy-v1 | tasnim7ahmed | 2023-09-27T19:29:54Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-27T19:08:09Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tasnim7ahmed/Huggy-v1
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
RogerB/bert-base-uncased-kinyarwanda-finetuned | RogerB | 2023-09-27T19:14:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-27T18:27:04Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-kinyarwanda-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-kinyarwanda-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified Kinyarwanda dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5872 | 1.0 | 5500 | 1.9887 |
| 1.9819 | 2.0 | 11000 | 1.7608 |
| 1.8218 | 3.0 | 16500 | 1.7007 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
JiemingYou/a2c-PandaReachDense-v3 | JiemingYou | 2023-09-27T19:09:05Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-27T19:03:26Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.16 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual sb3/Hub naming convention and may need adjusting for this repo):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed from the sb3 convention)
checkpoint = load_from_hub("JiemingYou/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Manab/LoRAImplement | Manab | 2023-09-27T19:04:08Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-27T19:04:04Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
anniew666/lora-roberta-large-0927 | anniew666 | 2023-09-27T19:03:26Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"region:us"
]
| null | 2023-09-27T11:03:46Z | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: lora-roberta-large-0927
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora-roberta-large-0927
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5366
- Accuracy: 0.4472
- Prec: 0.2000
- Recall: 0.4472
- F1: 0.2763
- B Acc: 0.1429
- Micro F1: 0.4472
- Prec Joy: 0.0
- Recall Joy: 0.0
- F1 Joy: 0.0
- Prec Anger: 0.0
- Recall Anger: 0.0
- F1 Anger: 0.0
- Prec Disgust: 0.0
- Recall Disgust: 0.0
- F1 Disgust: 0.0
- Prec Fear: 0.0
- Recall Fear: 0.0
- F1 Fear: 0.0
- Prec Neutral: 0.4472
- Recall Neutral: 1.0
- F1 Neutral: 0.6180
- Prec Sadness: 0.0
- Recall Sadness: 0.0
- F1 Sadness: 0.0
- Prec Surprise: 0.0
- Recall Surprise: 0.0
- F1 Surprise: 0.0
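These numbers are consistent with a collapsed classifier that predicts the neutral class (44.72% of the eval set) for every input; the headline metrics then follow directly from that class share (a quick sanity check, with the class share as the only assumption):

```python
p_neutral = 0.4472  # assumed share of neutral examples in the eval set

# Predicting neutral everywhere: recall(neutral) = 1, precision(neutral) = p_neutral
f1_neutral = 2 * p_neutral * 1.0 / (p_neutral + 1.0)

accuracy = p_neutral                  # micro accuracy equals the majority-class share
weighted_precision = p_neutral ** 2   # only neutral contributes, weighted by its share
weighted_f1 = p_neutral * f1_neutral
balanced_accuracy = 1 / 7             # one of seven classes has nonzero recall

print(f1_neutral, weighted_precision, weighted_f1, balanced_accuracy)
```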
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 25.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 | B Acc | Micro F1 | Prec Joy | Recall Joy | F1 Joy | Prec Anger | Recall Anger | F1 Anger | Prec Disgust | Recall Disgust | F1 Disgust | Prec Fear | Recall Fear | F1 Fear | Prec Neutral | Recall Neutral | F1 Neutral | Prec Sadness | Recall Sadness | F1 Sadness | Prec Surprise | Recall Surprise | F1 Surprise |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|:--------:|:--------:|:----------:|:------:|:----------:|:------------:|:--------:|:------------:|:--------------:|:----------:|:---------:|:-----------:|:-------:|:------------:|:--------------:|:----------:|:------------:|:--------------:|:----------:|:-------------:|:---------------:|:-----------:|
| 0.8381 | 1.25 | 2092 | 1.5415 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4866 | 2.5 | 4184 | 1.5564 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4862 | 3.75 | 6276 | 1.5700 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4762 | 5.0 | 8368 | 1.5391 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4765 | 6.25 | 10460 | 1.5566 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4848 | 7.5 | 12552 | 1.5411 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4782 | 8.75 | 14644 | 1.5548 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4943 | 10.0 | 16736 | 1.6115 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4801 | 11.25 | 18828 | 1.5424 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4946 | 12.5 | 20920 | 1.5637 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4867 | 13.75 | 23012 | 1.5492 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4957 | 15.01 | 25104 | 1.5812 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4913 | 16.26 | 27196 | 1.5425 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.5007 | 17.51 | 29288 | 1.5446 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4919 | 18.76 | 31380 | 1.5616 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4895 | 20.01 | 33472 | 1.5502 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4946 | 21.26 | 35564 | 1.5398 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4754 | 22.51 | 37656 | 1.5307 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4824 | 23.76 | 39748 | 1.5356 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
anupamtripathi/oreo_sd_xl | anupamtripathi | 2023-09-27T19:01:52Z | 1 | 2 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-25T22:01:07Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of Oreo biscuits
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
tclopess/bart_samsum | tclopess | 2023-09-27T18:40:33Z | 104 | 4 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-27T18:39:22Z | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart_samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_samsum
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
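The effective batch size above is the per-device batch multiplied by the accumulation steps — gradients are accumulated over 16 micro-batches of 4 before each optimizer step:

```python
train_batch_size = 4
gradient_accumulation_steps = 16

# Effective (total) train batch size per optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64
```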
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|