modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Gurusha/dreambooth_holding_glass | Gurusha | 2023-09-20T11:15:54Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-09-20T09:53:59Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a human sks hand holding an glass filled with a drink
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Gurusha/dreambooth_holding_glass
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a human sks hand holding an glass filled with a drink using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
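Below is a minimal usage sketch (not part of the original card) showing one way these LoRA weights could be loaded on top of the SDXL base model with diffusers; the repository id and the fp16 VAE follow the card above, everything else is an assumption.
```python
# Hedged sketch, not from the original card: load the SDXL base model together
# with the fp16 VAE mentioned above, then attach this repository's LoRA weights.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Gurusha/dreambooth_holding_glass")
image = pipe("a human sks hand holding an glass filled with a drink").images[0]
image.save("hand_holding_glass.png")
```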
|
dim/ruGPT-13b_dolly_oasst1_chip2 | dim | 2023-09-20T11:03:21Z | 3 | 1 | peft | [
"peft",
"region:us"
] | null | 2023-09-20T11:00:43Z | ---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/ruGPT-13b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=2,
bot_token_id=46787,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
# Гавайские острова - это рай для любителей природы, культуры и приключений. Это место, где можно исследовать тропические леса, плавать с дельфинами, кататься на волнах прибоя или просто наслаждаться солнцем и пляжем. В дополнение к этому, здесь есть множество достопримечательностей, которые стоит посетить, таких как вулканы Мауна-Лоа и Килауэа, водопады Ваймеа, пещеры Халеакала и многое другое. Если вы ищете что-то более расслабляющее, то посещение гавайских ресторанов и баров может быть отличным способом провести время.
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
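For reference, a minimal sketch (not part of the original card) of how the settings above map onto a `transformers` `BitsAndBytesConfig`; the variable name is an assumption.
```python
# Hedged sketch: BitsAndBytesConfig mirroring the quantization settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```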
### Framework versions
- PEFT 0.4.0
|
tvganesh/test_trainer1 | tvganesh | 2023-09-20T10:58:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-20T10:54:37Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test_trainer1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge1: 0.8111
- Rouge2: 0.8008
- Rougel: 0.812
- Rougelsum: 0.8109
- Gen Len: 18.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 40
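As a rough illustration only (not generated by the Trainer), the hyperparameters above correspond approximately to `Seq2SeqTrainingArguments` like the following; the output directory name is an assumption.
```python
# Hedged sketch: training arguments mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="test_trainer1",        # assumed name
    learning_rate=5.6e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```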
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 13 | 0.3042 | 0.7518 | 0.7064 | 0.7515 | 0.7499 | 18.2 |
| No log | 2.0 | 26 | 0.0621 | 0.7853 | 0.7648 | 0.7778 | 0.778 | 18.4667 |
| No log | 3.0 | 39 | 0.0600 | 0.7809 | 0.7539 | 0.7793 | 0.7794 | 18.3333 |
| No log | 4.0 | 52 | 0.0293 | 0.8073 | 0.7961 | 0.8076 | 0.8069 | 18.4 |
| No log | 5.0 | 65 | 0.0304 | 0.8053 | 0.7881 | 0.803 | 0.8027 | 18.4667 |
| No log | 6.0 | 78 | 0.0167 | 0.7787 | 0.7634 | 0.7794 | 0.7792 | 18.7 |
| No log | 7.0 | 91 | 0.0203 | 0.8076 | 0.7952 | 0.8083 | 0.8072 | 18.5333 |
| No log | 8.0 | 104 | 0.0418 | 0.7722 | 0.7493 | 0.7711 | 0.7695 | 18.7667 |
| No log | 9.0 | 117 | 0.0153 | 0.799 | 0.7804 | 0.7969 | 0.7964 | 18.4 |
| No log | 10.0 | 130 | 0.0225 | 0.7963 | 0.7804 | 0.7968 | 0.7952 | 18.5 |
| No log | 11.0 | 143 | 0.0119 | 0.7832 | 0.7676 | 0.784 | 0.7837 | 18.5 |
| No log | 12.0 | 156 | 0.0118 | 0.8023 | 0.7863 | 0.8024 | 0.8011 | 18.5 |
| No log | 13.0 | 169 | 0.0411 | 0.8019 | 0.7916 | 0.8034 | 0.8025 | 18.2667 |
| No log | 14.0 | 182 | 0.0048 | 0.8017 | 0.791 | 0.8029 | 0.8022 | 18.5 |
| No log | 15.0 | 195 | 0.0038 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 16.0 | 208 | 0.0080 | 0.8091 | 0.7967 | 0.8093 | 0.8086 | 18.5 |
| No log | 17.0 | 221 | 0.0046 | 0.8092 | 0.7967 | 0.8103 | 0.8095 | 18.5 |
| No log | 18.0 | 234 | 0.0023 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 19.0 | 247 | 0.0097 | 0.8105 | 0.799 | 0.8116 | 0.8105 | 18.5 |
| No log | 20.0 | 260 | 0.0024 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 21.0 | 273 | 0.0018 | 0.8111 | 0.7995 | 0.812 | 0.8109 | 18.5 |
| No log | 22.0 | 286 | 0.0030 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 23.0 | 299 | 0.0042 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 24.0 | 312 | 0.0065 | 0.8102 | 0.8 | 0.8114 | 0.8099 | 18.5 |
| No log | 25.0 | 325 | 0.0004 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 26.0 | 338 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 27.0 | 351 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 28.0 | 364 | 0.0010 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 29.0 | 377 | 0.0002 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 30.0 | 390 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 31.0 | 403 | 0.0020 | 0.8093 | 0.7975 | 0.8103 | 0.8089 | 18.5 |
| No log | 32.0 | 416 | 0.0014 | 0.8093 | 0.7975 | 0.8103 | 0.8089 | 18.5 |
| No log | 33.0 | 429 | 0.0001 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 34.0 | 442 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 35.0 | 455 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 36.0 | 468 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 37.0 | 481 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| No log | 38.0 | 494 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| 0.068 | 39.0 | 507 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
| 0.068 | 40.0 | 520 | 0.0000 | 0.8111 | 0.8008 | 0.812 | 0.8109 | 18.5 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
KennyFromPH/CorsetPantyAndStockingRVC | KennyFromPH | 2023-09-20T10:45:46Z | 0 | 0 | null | [
"region:us"
] | null | 2023-09-20T07:58:00Z | ---
license: openrail
---
Trained him via the RVC v2 Disconnected script at 256 epochs. |
CyberHarem/pekora_jashinchandropkick | CyberHarem | 2023-09-20T10:41:30Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/pekora_jashinchandropkick",
"license:mit",
"region:us"
] | text-to-image | 2023-09-20T10:23:18Z | ---
license: mit
datasets:
- CyberHarem/pekora_jashinchandropkick
pipeline_tag: text-to-image
tags:
- art
---
# Lora of pekora_jashinchandropkick
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8960, you need to download `8960/pekora_jashinchandropkick.pt` as the embedding and `8960/pekora_jashinchandropkick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8960**, with a score of 0.891. The trigger words are:
1. `pekora_jashinchandropkick`
2. `short_hair, hairband, blonde_hair, black_hairband, capelet, shaded_face, shiny_hair, hair_between_eyes`
We do not recommend this model for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:---------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9600 | 0.883 | [Download](9600/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9600/previews/nude.png) | [<NSFW, click to see>](9600/previews/nude2.png) |  |  |
| **8960** | **0.891** | [**Download**](8960/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8960/previews/nude.png) | [<NSFW, click to see>](8960/previews/nude2.png) |  |  |
| 8320 | 0.861 | [Download](8320/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8320/previews/nude.png) | [<NSFW, click to see>](8320/previews/nude2.png) |  |  |
| 7680 | 0.877 | [Download](7680/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7680/previews/nude.png) | [<NSFW, click to see>](7680/previews/nude2.png) |  |  |
| 7040 | 0.875 | [Download](7040/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7040/previews/nude.png) | [<NSFW, click to see>](7040/previews/nude2.png) |  |  |
| 6400 | 0.865 | [Download](6400/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6400/previews/nude.png) | [<NSFW, click to see>](6400/previews/nude2.png) |  |  |
| 5760 | 0.841 | [Download](5760/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5120 | 0.773 | [Download](5120/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5120/previews/nude.png) | [<NSFW, click to see>](5120/previews/nude2.png) |  |  |
| 4480 | 0.767 | [Download](4480/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3840 | 0.798 | [Download](3840/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3200 | 0.714 | [Download](3200/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2560 | 0.701 | [Download](2560/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2560/previews/nude.png) | [<NSFW, click to see>](2560/previews/nude2.png) |  |  |
| 1920 | 0.717 | [Download](1920/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1280 | 0.614 | [Download](1280/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1280/previews/nude.png) | [<NSFW, click to see>](1280/previews/nude2.png) |  |  |
| 640 | 0.485 | [Download](640/pekora_jashinchandropkick.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](640/previews/nude.png) | [<NSFW, click to see>](640/previews/nude2.png) |  |  |
|
dim/xglm-4.5b_dolly_oasst1_chip2 | dim | 2023-09-20T10:38:18Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-20T10:37:14Z | ---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/xglm-4.5b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=0,
bot_token_id=7425,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
# Я был там! Это было незабываемое путешествие, которое я никогда не забуду. Мы посетили все основные достопримечательности острова, включая пляжи, вулканы, пещеры, национальные парки и многое другое. Впечатления от посещения были потрясающими, а культура - уникальной. Поездка была отличным способом исследовать остров и узнать больше об истории его жителей. Надеюсь, что вы также захотите посетить это место!
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
DuyTa/Vietnamese_ASR | DuyTa | 2023-09-20T10:36:20Z | 4 | 1 | peft | [
"peft",
"tensorboard",
"base_model:openai/whisper-medium",
"base_model:adapter:openai/whisper-medium",
"region:us"
] | null | 2023-08-17T14:41:26Z | ---
library_name: peft
base_model: openai/whisper-medium
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
dim/llama2_13b_dolly_oasst1_chip2 | dim | 2023-09-20T10:21:54Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-08-08T15:09:20Z | ---
license: mit
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/llama2_13b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=1,
bot_token_id=9225,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
``` |
CyberHarem/murakami_tomoe_idolmastercinderellagirls | CyberHarem | 2023-09-20T10:17:14Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/murakami_tomoe_idolmastercinderellagirls",
"license:mit",
"region:us"
] | text-to-image | 2023-09-20T10:02:22Z | ---
license: mit
datasets:
- CyberHarem/murakami_tomoe_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of murakami_tomoe_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3060, you need to download `3060/murakami_tomoe_idolmastercinderellagirls.pt` as the embedding and `3060/murakami_tomoe_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3060**, with a score of 0.942. The trigger words are:
1. `murakami_tomoe_idolmastercinderellagirls`
2. `red_hair, short_hair, blush, brown_eyes`
We do not recommend this model for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.895 | [Download](5100/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](5100/previews/pattern_8.png) | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.920 | [Download](4760/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/pattern_8.png) | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.869 | [Download](4420/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4420/previews/pattern_8.png) | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.932 | [Download](4080/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/pattern_8.png) | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.909 | [Download](3740/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3740/previews/pattern_8.png) | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.915 | [Download](3400/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/pattern_8.png) | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| **3060** | **0.942** | [**Download**](3060/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](3060/previews/pattern_8.png) | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.883 | [Download](2720/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/pattern_8.png) | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.915 | [Download](2380/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2380/previews/pattern_8.png) | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.905 | [Download](2040/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/pattern_8.png) | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.887 | [Download](1700/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1700/previews/pattern_8.png) | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.885 | [Download](1360/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/pattern_8.png) | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.860 | [Download](1020/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](1020/previews/pattern_8.png) | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.756 | [Download](680/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/pattern_8.png) | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.731 | [Download](340/murakami_tomoe_idolmastercinderellagirls.zip) |  |  |  |  |  |  |  | [<NSFW, click to see>](340/previews/pattern_8.png) | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
Kendong/bc_flowers | Kendong | 2023-09-20T10:12:39Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-09-19T13:18:57Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photograph of a BSC vase of ASK flowers on a wooden table
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Kendong/bc_flowers
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photograph of a BSC vase of ASK flowers on a wooden table using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
zhuwch/all-MiniLM-L6-v2 | zhuwch | 2023-09-20T10:07:25Z | 11 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-09-20T07:37:02Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which one out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as help from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
By default, input text longer than 256 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
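As an illustration only (not from the original card), here is a minimal sketch of this in-batch contrastive loss; the similarity scale factor is an assumption.
```python
# Hedged sketch: in-batch contrastive loss over cosine similarities, where the
# true pair for the i-th sentence in emb_a is the i-th sentence in emb_b.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale        # cosine similarity of every possible pair in the batch
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)  # compare against the true pairs
```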
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** | |
Alexzyx/lora-trained-xl-colab-v3 | Alexzyx | 2023-09-20T09:59:18Z | 7 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-09-20T06:19:56Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Alexzyx/lora-trained-xl-colab-v3
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
swaroopajit/git-base-fashion | swaroopajit | 2023-09-20T09:58:46Z | 170 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-09-01T08:19:38Z | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-fashion
results: []
pipeline_tag: image-to-text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-fashion
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 |
Technotech/sd-prompt-instruct-3b-epoch-0.4-lora | Technotech | 2023-09-20T09:54:02Z | 2 | 0 | peft | [
"peft",
"stable-diffusion",
"instruct",
"magic-prompt",
"en",
"dataset:Technotech/sd-prompt-instruct",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"region:us"
] | null | 2023-07-29T15:14:28Z | ---
language:
- en
license: apache-2.0
library_name: peft
tags:
- stable-diffusion
- instruct
- magic-prompt
datasets:
- Technotech/sd-prompt-instruct
base_model: openlm-research/open_llama_3b_v2
---
# Stable Diffusion Prompt Instruct 3B (OpenLlama v2 3B)
Trained for 0.4 epochs (test) on [Technotech/sd-prompt-instruct](https://huggingface.co/datasets/Technotech/sd-prompt-instruct).
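A minimal loading sketch (not included in the original card), assuming the adapter can be attached to the OpenLlama v2 3B base with `peft`; 4-bit loading mirrors the quantization config listed below.
```python
# Hedged sketch: load the base model in 4-bit and attach this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b_v2",
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Technotech/sd-prompt-instruct-3b-epoch-0.4-lora")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
```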
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0 |
SHENMU007/neunit_BASE_V9.5.14 | SHENMU007 | 2023-09-20T09:53:21Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-09-20T08:19:51Z | ---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
phamhungd/GuoZovya | phamhungd | 2023-09-20T09:52:42Z | 0 | 0 | null | [
"onnx",
"vi",
"en",
"license:unknown",
"region:us"
] | null | 2023-02-28T05:44:19Z | ---
license: unknown
language:
- vi
- en
---
About this version
Model Merge:
A to Zovya RPG Artist's Tools
GuoFeng2
#Note: This is a 50/50 mix of my two favorite models, and the results are excellent: faces come out nearly perfect with default settings, and it handles a full range of other styles almost flawlessly. With this model I have hardly needed to reach for another model to produce images across different genres. I hope everyone will support it and explore its full potential.
|
hosnasn/hosna_DB | hosnasn | 2023-09-20T09:32:39Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-09-20T09:32:37Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks hosna
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
CyberHarem/makihara_shiho_idolmastercinderellagirls | CyberHarem | 2023-09-20T09:27:49Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/makihara_shiho_idolmastercinderellagirls",
"license:mit",
"region:us"
] | text-to-image | 2023-09-20T09:15:47Z | ---
license: mit
datasets:
- CyberHarem/makihara_shiho_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of makihara_shiho_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 4420, you need to download `4420/makihara_shiho_idolmastercinderellagirls.pt` as the embedding and `4420/makihara_shiho_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 4420**, with a score of 0.882. The trigger words are:
1. `makihara_shiho_idolmastercinderellagirls`
2. `brown_hair, long_hair, green_eyes, smile, open_mouth, blush, bow, breasts`
We do not recommend this model for the following groups, and we express our regret to them:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
The available steps are:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.859 | [Download](5100/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](5100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.861 | [Download](4760/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| **4420** | **0.882** | [**Download**](4420/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.856 | [Download](4080/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3740 | 0.858 | [Download](3740/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3740/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.847 | [Download](3400/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.795 | [Download](3060/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](3060/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.735 | [Download](2720/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.781 | [Download](2380/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.816 | [Download](2040/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.798 | [Download](1700/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.797 | [Download](1360/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.670 | [Download](1020/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](1020/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.649 | [Download](680/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.455 | [Download](340/makihara_shiho_idolmastercinderellagirls.zip) |  |  |  |  | [<NSFW, click to see>](340/previews/bondage.png) |  |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
ecnu-icalk/educhat-sft-002-13b-baichuan | ecnu-icalk | 2023-09-20T09:25:46Z | 109 | 3 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-31T08:00:07Z | This model was trained from [educhat-base-002-13b-baichuan](https://huggingface.co/ecnu-icalk/educhat-base-002-13b-baichuan).
To use it:
1. First clone this repository locally.
2. Then place the weights from the [Baidu Netdisk](https://pan.baidu.com/s/1j49404ZIE7NF33JFhnYpBw?pwd=bqvc) link into the folder cloned in step 1, as sketched below.
For model usage, refer to the Baichuan model.
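A minimal sketch of these steps, assuming git-lfs is installed and using a placeholder path for the downloaded weights:

```shell
git lfs install
git clone https://huggingface.co/ecnu-icalk/educhat-sft-002-13b-baichuan
# Copy the weight files downloaded from the Baidu Netdisk link into the cloned folder
cp /path/to/baidu-netdisk-weights/* educhat-sft-002-13b-baichuan/
```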
|
ktadzjibov/opus-mt-en-ru-finetuned-en-to-ru-amls | ktadzjibov | 2023-09-20T09:19:34Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"base_model:Helsinki-NLP/opus-mt-en-ru",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ru",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-16T21:24:50Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ru
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ru-finetuned-en-to-ru-amls
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ru-en
split: validation
args: ru-en
metrics:
- name: Bleu
type: bleu
value: 28.7642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ru-finetuned-en-to-ru-amls
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2832
- Bleu: 28.7642
- Gen Len: 29.495
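An illustrative usage sketch with the `transformers` pipeline API (the input sentence and generation length are arbitrary examples):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and translate English to Russian.
translator = pipeline(
    "translation_en_to_ru",
    model="ktadzjibov/opus-mt-en-ru-finetuned-en-to-ru-amls",
)

result = translator("Machine translation is improving quickly.", max_length=64)
print(result[0]["translation_text"])
```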
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.9806 | 1.0 | 94761 | 1.2832 | 28.7642 | 29.495 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kligvasser/ffhq-vqvae-clip-landmark-arcface | kligvasser | 2023-09-20T09:18:41Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:ConditionalDDPMPipeline",
"region:us"
] | null | 2023-09-14T07:41:15Z | Please refer to [facial-diffusion](https://github.com/kligvasser/facial-diffusion) for evaluation and training examples.
Condition:

Sample:
 |
kligvasser/ffhq-vqvae-attr | kligvasser | 2023-09-20T09:13:10Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:ConditionalDDPMPipeline",
"region:us"
] | null | 2023-09-13T11:51:13Z | Please refer to [facial-diffusion](https://github.com/kligvasser/facial-diffusion) for evaluation and training examples.
Condition:

Sample:

Fixed condition:
 |
MattyB95/pre_trained_DF_RawNet2 | MattyB95 | 2023-09-20T09:09:36Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-09-17T22:03:34Z | ---
license: mit
---
## Citation
If you use this code in your research please use the following citation:
```bibtex
@INPROCEEDINGS{9414234,
author={Tak, Hemlata and Patino, Jose and Todisco, Massimiliano and Nautsch, Andreas and Evans, Nicholas and Larcher, Anthony},
booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={End-to-End anti-spoofing with RawNet2},
year={2021},
pages={6369-6373}
}
```
|
TheBloke/LLaMA-13b-GGUF | TheBloke | 2023-09-20T09:03:17Z | 154 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"license:other",
"region:us"
] | null | 2023-09-20T02:19:56Z | ---
base_model: https://ai.meta.com/blog/large-language-model-llama-meta-ai
inference: false
license: other
model_creator: Meta
model_name: LLaMA 13B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LLaMA 13B - GGUF
- Model creator: [Meta](https://huggingface.co/none)
- Original model: [LLaMA 13B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Meta's LLaMA 13b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-13b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-13b-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-13b.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [llama-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [llama-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [llama-13b.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [llama-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [llama-13b.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [llama-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [llama-13b.Q6_K.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [llama-13b.Q8_0.gguf](https://huggingface.co/TheBloke/LLaMA-13b-GGUF/blob/main/llama-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/LLaMA-13b-GGUF and below it, a specific filename to download, such as: llama-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/LLaMA-13b-GGUF llama-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/LLaMA-13b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-13b-GGUF llama-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m llama-13b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMA-13b-GGUF", model_file="llama-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
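#### Simple llama-cpp-python example code

A similarly minimal sketch using `llama-cpp-python`; the parameter values are illustrative, so adjust `n_gpu_layers` and `n_ctx` to your hardware and the sequence length you need.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(
    model_path="./llama-13b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

output = llm("AI is going to", max_tokens=64, temperature=0.7)
print(output["choices"][0]["text"])
```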
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Meta's LLaMA 13b
This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
<!-- original-model-card end -->
|
kligvasser/ffhq-vqvae-landmark | kligvasser | 2023-09-20T09:01:39Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:ConditionalDDPMPipeline",
"region:us"
] | null | 2023-09-13T12:30:59Z | Please refer to [facial-diffusion](https://github.com/kligvasser/facial-diffusion) for evaluation and training examples.
Condition:

Samples:

Fixed latent:

|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-19_baseline_gates_exitlosses | jordyvl | 2023-09-20T09:00:20Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-19T18:52:55Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-19_baseline_gates_exitlosses
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-19_baseline_gates_exitlosses
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4148
- Accuracy: 0.69
- Exit 0 Accuracy: 0.0625
- Exit 1 Accuracy: 0.0625
- Exit 2 Accuracy: 0.0625
- Exit 3 Accuracy: 0.0625
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 18
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 12
- total_train_batch_size: 216
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.8 | 3 | 5.9533 | 0.1025 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 1.8 | 6 | 5.6136 | 0.1375 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 2.8 | 9 | 5.3608 | 0.195 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 3.8 | 12 | 5.0940 | 0.2075 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 4.8 | 15 | 4.8476 | 0.2325 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 5.8 | 18 | 4.5730 | 0.18 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 6.8 | 21 | 4.3736 | 0.2 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 7.8 | 24 | 4.1419 | 0.24 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 8.8 | 27 | 4.0632 | 0.2575 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 9.8 | 30 | 3.8993 | 0.2725 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 10.8 | 33 | 3.7832 | 0.2725 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 11.8 | 36 | 3.6698 | 0.3175 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 12.8 | 39 | 3.5906 | 0.3375 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 13.8 | 42 | 3.5428 | 0.3825 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 14.8 | 45 | 3.4243 | 0.4075 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 15.8 | 48 | 3.3143 | 0.4475 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 16.8 | 51 | 3.2227 | 0.465 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 17.8 | 54 | 3.1765 | 0.4775 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 18.8 | 57 | 3.0634 | 0.51 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 19.8 | 60 | 3.0335 | 0.5125 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 20.8 | 63 | 2.9393 | 0.5625 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 21.8 | 66 | 2.9017 | 0.58 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 22.8 | 69 | 2.8951 | 0.565 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 23.8 | 72 | 2.8495 | 0.6 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 24.8 | 75 | 2.7970 | 0.6075 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 25.8 | 78 | 2.7476 | 0.615 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 26.8 | 81 | 2.7269 | 0.625 | 0.065 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 27.8 | 84 | 2.7068 | 0.615 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 28.8 | 87 | 2.7034 | 0.6375 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 29.8 | 90 | 2.6356 | 0.66 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 30.8 | 93 | 2.6201 | 0.6525 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 31.8 | 96 | 2.5855 | 0.66 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 32.8 | 99 | 2.5871 | 0.6575 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 33.8 | 102 | 2.5959 | 0.665 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 34.8 | 105 | 2.5538 | 0.6625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 35.8 | 108 | 2.5488 | 0.65 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 36.8 | 111 | 2.5309 | 0.675 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 37.8 | 114 | 2.5034 | 0.6825 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 38.8 | 117 | 2.5297 | 0.6625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 39.8 | 120 | 2.4822 | 0.6725 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 40.8 | 123 | 2.5056 | 0.66 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 41.8 | 126 | 2.4856 | 0.68 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 42.8 | 129 | 2.4918 | 0.67 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 43.8 | 132 | 2.4635 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 44.8 | 135 | 2.4477 | 0.68 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 45.8 | 138 | 2.4668 | 0.68 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 46.8 | 141 | 2.4412 | 0.685 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 47.8 | 144 | 2.4477 | 0.6825 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 48.8 | 147 | 2.4581 | 0.68 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 49.8 | 150 | 2.4446 | 0.6875 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 50.8 | 153 | 2.4598 | 0.685 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 51.8 | 156 | 2.4323 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 52.8 | 159 | 2.4148 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 53.8 | 162 | 2.4228 | 0.6925 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 54.8 | 165 | 2.4136 | 0.695 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 55.8 | 168 | 2.4112 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 56.8 | 171 | 2.4135 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 57.8 | 174 | 2.4219 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 58.8 | 177 | 2.4158 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| No log | 59.8 | 180 | 2.4148 | 0.69 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
mohsen2/my-snappfood-model | mohsen2 | 2023-09-20T08:53:14Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:HooshvareLab/bert-fa-base-uncased-sentiment-snappfood",
"base_model:finetune:HooshvareLab/bert-fa-base-uncased-sentiment-snappfood",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-20T05:48:43Z | ---
license: apache-2.0
base_model: HooshvareLab/bert-fa-base-uncased-sentiment-snappfood
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: my-snappfood-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-snappfood-model
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased-sentiment-snappfood](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-snappfood) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2433
- Accuracy: 0.8613
- F1: 0.8613
- Precision: 0.8615
- Recall: 0.8613
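An illustrative inference sketch with the `transformers` pipeline API (the review text is an arbitrary Persian example, not taken from the training data):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mohsen2/my-snappfood-model",
)

# An arbitrary Persian food-delivery review ("The food was delicious and arrived on time").
print(classifier("غذا خیلی خوشمزه بود و به موقع رسید"))
# -> [{'label': ..., 'score': ...}]
```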
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2358 | 1.0 | 2363 | 0.3235 | 0.869 | 0.8690 | 0.8691 | 0.869 |
| 0.1925 | 2.0 | 4726 | 0.3717 | 0.855 | 0.8550 | 0.8553 | 0.855 |
| 0.1423 | 3.0 | 7089 | 0.5230 | 0.867 | 0.8669 | 0.8683 | 0.867 |
| 0.1135 | 4.0 | 9452 | 0.6233 | 0.8691 | 0.8690 | 0.8709 | 0.8691 |
| 0.0876 | 5.0 | 11815 | 0.7637 | 0.8636 | 0.8635 | 0.8644 | 0.8636 |
| 0.063 | 6.0 | 14178 | 0.8685 | 0.8544 | 0.8544 | 0.8547 | 0.8544 |
| 0.0435 | 7.0 | 16541 | 0.9789 | 0.8607 | 0.8606 | 0.8616 | 0.8607 |
| 0.0279 | 8.0 | 18904 | 1.1560 | 0.8579 | 0.8578 | 0.8579 | 0.8579 |
| 0.0184 | 9.0 | 21267 | 1.1904 | 0.8653 | 0.8652 | 0.8659 | 0.8653 |
| 0.0092 | 10.0 | 23630 | 1.2433 | 0.8613 | 0.8613 | 0.8615 | 0.8613 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ShivamMangale/XLM-Roberta-base-finetuned-squad-syn-first-ent-gen-syn | ShivamMangale | 2023-09-20T08:47:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-09-20T08:31:51Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: XLM-Roberta-base-finetuned-squad-syn-first-ent-gen-syn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-Roberta-base-finetuned-squad-syn-first-ent-gen-syn
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset.
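An illustrative extractive question-answering sketch (the question and context below are arbitrary examples):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ShivamMangale/XLM-Roberta-base-finetuned-squad-syn-first-ent-gen-syn",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of xlm-roberta-base on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```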
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
gianclbal/my_awesome_model | gianclbal | 2023-09-20T08:46:18Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-19T14:06:41Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1770
- Accuracy: 0.9366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 80 | 0.2455 | 0.8239 |
| No log | 2.0 | 160 | 0.1770 | 0.9366 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.1.0.dev20230901
- Datasets 2.14.5
- Tokenizers 0.11.0
|
isashap/clmmasking-nolabelmask-waldomodel | isashap | 2023-09-20T08:44:54Z | 31 | 0 | peft | [
"peft",
"text-generation",
"region:us"
] | text-generation | 2023-09-20T08:43:31Z | ---
library_name: peft
pipeline_tag: text-generation
--- |
CyberHarem/doumyouji_karin_idolmastercinderellagirls | CyberHarem | 2023-09-20T08:39:00Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/doumyouji_karin_idolmastercinderellagirls",
"license:mit",
"region:us"
] | text-to-image | 2023-09-20T08:25:48Z | ---
license: mit
datasets:
- CyberHarem/doumyouji_karin_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of doumyouji_karin_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 3740, you need to download `3740/doumyouji_karin_idolmastercinderellagirls.pt` as the embedding and `3740/doumyouji_karin_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 3740**, with a score of 0.762. The trigger words are:
1. `doumyouji_karin_idolmastercinderellagirls`
2. `brown_hair, short_hair, blush, open_mouth, brown_eyes, smile, red_eyes, breasts, hair_ornament`
This model is not recommended for the following groups, and we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5100 | 0.676 | [Download](5100/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](5100/previews/pattern_4.png) |  |  | [<NSFW, click to see>](5100/previews/bondage.png) | [<NSFW, click to see>](5100/previews/free.png) |  |  | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) |  |  |
| 4760 | 0.665 | [Download](4760/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4760/previews/pattern_4.png) |  |  | [<NSFW, click to see>](4760/previews/bondage.png) | [<NSFW, click to see>](4760/previews/free.png) |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4420 | 0.762 | [Download](4420/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4420/previews/pattern_4.png) |  |  | [<NSFW, click to see>](4420/previews/bondage.png) | [<NSFW, click to see>](4420/previews/free.png) |  |  | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) |  |  |
| 4080 | 0.727 | [Download](4080/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](4080/previews/pattern_4.png) |  |  | [<NSFW, click to see>](4080/previews/bondage.png) | [<NSFW, click to see>](4080/previews/free.png) |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| **3740** | **0.762** | [**Download**](3740/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3740/previews/pattern_4.png) |  |  | [<NSFW, click to see>](3740/previews/bondage.png) | [<NSFW, click to see>](3740/previews/free.png) |  |  | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) |  |  |
| 3400 | 0.681 | [Download](3400/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3400/previews/pattern_4.png) |  |  | [<NSFW, click to see>](3400/previews/bondage.png) | [<NSFW, click to see>](3400/previews/free.png) |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 3060 | 0.683 | [Download](3060/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](3060/previews/pattern_4.png) |  |  | [<NSFW, click to see>](3060/previews/bondage.png) | [<NSFW, click to see>](3060/previews/free.png) |  |  | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) |  |  |
| 2720 | 0.616 | [Download](2720/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2720/previews/pattern_4.png) |  |  | [<NSFW, click to see>](2720/previews/bondage.png) | [<NSFW, click to see>](2720/previews/free.png) |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2380 | 0.614 | [Download](2380/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2380/previews/pattern_4.png) |  |  | [<NSFW, click to see>](2380/previews/bondage.png) | [<NSFW, click to see>](2380/previews/free.png) |  |  | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) |  |  |
| 2040 | 0.627 | [Download](2040/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](2040/previews/pattern_4.png) |  |  | [<NSFW, click to see>](2040/previews/bondage.png) | [<NSFW, click to see>](2040/previews/free.png) |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1700 | 0.589 | [Download](1700/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1700/previews/pattern_4.png) |  |  | [<NSFW, click to see>](1700/previews/bondage.png) | [<NSFW, click to see>](1700/previews/free.png) |  |  | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) |  |  |
| 1360 | 0.483 | [Download](1360/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1360/previews/pattern_4.png) |  |  | [<NSFW, click to see>](1360/previews/bondage.png) | [<NSFW, click to see>](1360/previews/free.png) |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 1020 | 0.204 | [Download](1020/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](1020/previews/pattern_4.png) |  |  | [<NSFW, click to see>](1020/previews/bondage.png) | [<NSFW, click to see>](1020/previews/free.png) |  |  | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) |  |  |
| 680 | 0.236 | [Download](680/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](680/previews/pattern_4.png) |  |  | [<NSFW, click to see>](680/previews/bondage.png) | [<NSFW, click to see>](680/previews/free.png) |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
| 340 | 0.152 | [Download](340/doumyouji_karin_idolmastercinderellagirls.zip) |  |  |  | [<NSFW, click to see>](340/previews/pattern_4.png) |  |  | [<NSFW, click to see>](340/previews/bondage.png) | [<NSFW, click to see>](340/previews/free.png) |  |  | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) |  |  |
|
TheBloke/LLaMa-30B-GPTQ | TheBloke | 2023-09-20T08:28:25Z | 267 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2023-07-15T17:29:50Z | ---
base_model: https://ai.meta.com/blog/large-language-model-llama-meta-ai
inference: false
license: other
model_creator: Meta
model_name: LLaMA 33B
model_type: llama
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LLaMA 33B - GPTQ
- Model creator: [Meta](https://huggingface.co/none)
- Original model: [LLaMA 33B](https://ai.meta.com/blog/large-language-model-llama-meta-ai)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Meta's LLaMA 30b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-30b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-30b-GGUF)
* [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-30b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 16.94 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 19.44 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 18.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 17.55 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 32.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 33.73 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 12.92 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.51 GB | No | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/LLaMA-30b-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/LLaMA-30b-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to do a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LLaMA-30b-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/LLaMA-30b-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LLaMA-30b-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/LLaMA-30b-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-3bit-128g-actorder_False"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
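For loading with AutoGPTQ directly (rather than through Transformers), a minimal sketch — the `device` value is a placeholder:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/LLaMA-30b-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
# quantize_config.json is read from the repo, so no manual GPTQ parameters are needed.
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
)
```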
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
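As a rough sketch, TGI can serve this model with its GPTQ loader; the image tag, port, and token limits below are placeholders to adjust for your setup:
```shell
docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id TheBloke/LLaMA-30b-GPTQ \
  --quantize gptq \
  --max-input-length 1024 --max-total-tokens 2048
```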
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta's LLaMA 30b
This contains the weights for the LLaMA-30b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
|
tobiasc/segformer-b3-finetuned-segments-sidewalk | tobiasc | 2023-09-20T08:14:38Z | 189 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"segformer",
"generated_from_trainer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"base_model:nvidia/mit-b3",
"base_model:finetune:nvidia/mit-b3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-03-03T17:19:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
base_model: nvidia/mit-b3
model-index:
- name: segformer-b3-finetuned-segments-sidewalk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b3-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8527
- Miou: 0.4345
- Macc: 0.5079
- Overall Accuracy: 0.8871
- Per Category Iou: [nan, 0.8382620833593052, 0.8876413942052827, 0.6261839847460975, 0.6590417473673477, 0.48228357004057837, 0.0, 0.6202905105623743, 0.748344409080285, 0.39096811362981676, 0.8848513296576286, 0.2415092028297553, 0.0, 0.07068982339740462, 0.41356382978723405, 0.6474134903246308, 0.0, 0.3062052505966587, 0.7704161510118073, 0.16108765491481541, 0.49752934863906867, 0.4734664813860761, 0.09820294554789893, nan, 0.17153699720635862, 0.514555863370054, 0.4660696051735875, 0.08826901031715705, 0.8991007829081079, 0.829742650939299, 0.9612781430019607, 0.01112666737555973, 0.1861992251927429, 0.391388886866003, 0.0]
- Per Category Accuracy: [nan, 0.9255583122183136, 0.9555184973850358, 0.8927561553139153, 0.7130378697969978, 0.6275811980710011, 0.0, 0.7474676455043131, 0.8545937449541465, 0.43523520560447965, 0.9672661630501664, 0.28627436744473084, 0.0, 0.0707036205718747, 0.47675012774655084, 0.7689381524189783, 0.0, 0.31600985221674877, 0.9278457312029238, 0.2055231456928555, 0.6363063556709445, 0.5255962863991213, 0.10240946878962942, nan, 0.30514996921453075, 0.6575213496395762, 0.6054551483999336, 0.08830275229357798, 0.9550074747938649, 0.8984159398975186, 0.9823971352874257, 0.013025497748978224, 0.3256981066248004, 0.49491941043060034, 0.0]
## Model description
More information needed
## Intended uses & limitations
More information needed
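A minimal inference sketch is shown below. It assumes the repo includes a preprocessor config; if not, load the feature extractor from the base checkpoint `nvidia/mit-b3` instead. The example image is the widget image from this card.
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
import torch

checkpoint = "tobiasc/segformer-b3-finetuned-segments-sidewalk"
# Widget image from this card; replace with your own street-scene photo.
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Upsample to the input resolution and take the per-pixel argmax to get class ids.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```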
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
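An illustrative reconstruction of these settings as `TrainingArguments` (the original training script is not part of this card, so treat the argument mapping as an assumption):
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above onto the standard Trainer API.
training_args = TrainingArguments(
    output_dir="segformer-b3-finetuned-segments-sidewalk",
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=200,
)
```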
### Training results
| Training Loss | Epoch | Step | Validation Loss | Miou | Macc | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:----------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 0.4111 | 5.0 | 250 | 0.5342 | 0.3203 | 0.3895 | 0.8534 | [nan, 0.7411544992329885, 0.8587185188919024, 0.5322704558305212, 0.6145803724062279, 0.4207354824823325, 0.0, 0.4207652960849892, 0.6330214639515686, 0.0, 0.8090628889518269, 0.0, 0.0, 0.0, 0.0, 0.5525831345173927, 0.0, 0.0, 0.7449180731329554, 0.0, 0.39030048997684846, 0.5341813036240857, 0.0, nan, 0.0, 0.33603046089798805, 0.0, 0.0, 0.8611153164212525, 0.7580460497843906, 0.9307216449484303, 0.0, 0.12255543837545918, 0.30973651706611804, 0.0] | [nan, 0.865566426722594, 0.9394823497202754, 0.7339862219054845, 0.6960293899277608, 0.579233048631689, 0.0, 0.5226808772686938, 0.8148925583341846, 0.0, 0.954498711658196, 0.0, 0.0, 0.0, 0.0, 0.7453393323599813, 0.0, 0.0, 0.8609332075296946, 0.0, 0.5752897519263941, 0.6257335170644275, 0.0, nan, 0.0, 0.48320796165623753, 0.0, 0.0, 0.9563707209678979, 0.8591391181347248, 0.9690236728180618, 0.0, 0.23234437690377469, 0.43908949309871237, 0.0] |
| 0.2527 | 10.0 | 500 | 0.5899 | 0.3521 | 0.4258 | 0.8567 | [nan, 0.7536144931874272, 0.8666514611419747, 0.5791278186302583, 0.5507597043116981, 0.38697553330878387, 0.0, 0.49981379939131665, 0.6547462641660816, 0.006951340615690168, 0.8411064971463371, 0.21915505349651998, 0.0, 0.0, 0.0, 0.5704538365564567, 0.0, 0.0, 0.7601855085224487, 0.12506138175041864, 0.39942757047955846, 0.4668252406895441, 0.0, nan, 0.1030902538148915, 0.3805134719351324, 0.3639179515418502, 0.0, 0.8767798800448732, 0.7800121144818535, 0.9401348565379605, 0.00018008110081004338, 0.16755112790045706, 0.3264804931974313, 0.0] | [nan, 0.90406121258153, 0.933431363952898, 0.7264726392177598, 0.5740020955021516, 0.6563755737609668, 0.0, 0.6039363626224962, 0.8605186066359769, 0.0072522755864722855, 0.9522222704681134, 0.25318546484190657, 0.0, 0.0, 0.0, 0.7265874080033372, 0.0, 0.0, 0.9034801649669348, 0.15050382604742785, 0.6282930136175867, 0.4977144779061467, 0.0, nan, 0.1478142316826458, 0.4757332103391217, 0.43831868678494446, 0.0, 0.9461766367056283, 0.8685344399584078, 0.9696726615409282, 0.00019363628190676414, 0.39697811413925904, 0.4314488757452496, 0.0] |
| 0.1643 | 15.0 | 750 | 0.5756 | 0.3745 | 0.4534 | 0.8670 | [nan, 0.7726733036696652, 0.8671375594955328, 0.6103086102682944, 0.6314757371793478, 0.4273275344315441, 0.0, 0.5317600409405491, 0.6720224116289428, 0.16158774132109774, 0.8523694222801956, 0.24038155802861685, 0.0, 0.0, 0.04680851063829787, 0.5899459811865512, 0.0, 0.0, 0.7737178234025645, 0.14913933159903917, 0.4521741438458425, 0.5380504294958312, 0.0, nan, 0.14005003894540563, 0.40247802412573747, 0.41014102702120786, 0.0, 0.8822285387940414, 0.7982006290565458, 0.9485248204807992, 0.0010217644126931384, 0.12182141082818915, 0.3359618308006764, 0.0] | [nan, 0.8685068344016257, 0.9339337963085826, 0.7830275791780654, 0.71311646057369, 0.6411881935971181, 0.0, 0.7043771304992945, 0.8750572549898341, 0.18416833172993907, 0.9605602195211583, 0.301884052709914, 0.0, 0.0, 0.047777210015329585, 0.7549536664580913, 0.0, 0.0, 0.9068618510561295, 0.22672929767406622, 0.5668210000093578, 0.6053490157566916, 0.0, nan, 0.26095083120767, 0.5263161648629628, 0.5264190570939037, 0.0, 0.9540034951620896, 0.8931918202171408, 0.9742561443961733, 0.0012759247861356422, 0.2862606175274747, 0.465761930571415, 0.0] |
| 0.1302 | 20.0 | 1000 | 0.6021 | 0.3949 | 0.4802 | 0.8703 | [nan, 0.7801307689107726, 0.8786287731596124, 0.5996414476192669, 0.5791044393247451, 0.40685088294894184, 0.0, 0.5532316603013168, 0.7004119209771223, 0.3567117426846971, 0.8682022390275189, 0.4354632088736135, 0.0, 0.08566271525440265, 0.0, 0.592928092042186, 0.0, 0.07216748768472907, 0.7775334326155094, 0.16241710128130835, 0.46182139479806994, 0.562496456296332, 0.0, nan, 0.17592232145836345, 0.4180433534862313, 0.4040778498609824, 0.0, 0.8871669760617459, 0.8059650048666752, 0.9507085299921569, 0.0116151761673367, 0.16524860560484375, 0.34088472074456944, 0.0] | [nan, 0.901160937374861, 0.9425971578567806, 0.7984110745840901, 0.6312022008440196, 0.6127889140665853, 0.0, 0.6839893129548904, 0.8679197408614445, 0.4606921729692395, 0.9554783385950772, 0.5059353105601336, 0.0, 0.08568361471650239, 0.0, 0.7677860214733371, 0.0, 0.07216748768472907, 0.9086920613558305, 0.26755814834457153, 0.6342091828512193, 0.6617058325161462, 0.0, nan, 0.347809833758466, 0.541995549384712, 0.5421986403581496, 0.0, 0.9485582664128994, 0.9007181197365832, 0.9752496697792675, 0.013976390204770367, 0.39040296284368586, 0.42825081431510703, 0.0] |
| 0.1124 | 25.0 | 1250 | 0.5783 | 0.4085 | 0.4818 | 0.8809 | [nan, 0.8123818380472958, 0.8869254012115516, 0.5989965500806077, 0.6513288286982387, 0.45923979621249245, 0.0, 0.551056327882726, 0.7019146834355392, 0.2950008215576734, 0.8706733575298916, 0.3601874581566615, 0.0, 0.10517468206402572, 0.08712413261372398, 0.6136850006388144, 0.0, 0.2600985221674877, 0.7849825834204975, 0.17919511788917702, 0.45289730566932423, 0.5903637402399543, 0.0, nan, 0.18690435558822757, 0.42362687815353783, 0.43259719089833193, 0.0, 0.8841707465292419, 0.8032936112469397, 0.952030831872504, 0.008140849441390317, 0.16554455213884192, 0.3617462711649899, 0.0] | [nan, 0.9178324592492587, 0.9561686622912909, 0.7680310658482571, 0.7215460770544782, 0.5924548254023589, 0.0, 0.6491584679315913, 0.8452550030151549, 0.35430079676361037, 0.9581720479074639, 0.410135404944277, 0.0, 0.10532350931980092, 0.11548288196218702, 0.763316547977315, 0.0, 0.2600985221674877, 0.9177799037685564, 0.22825214031366012, 0.572804752898559, 0.6994932257437348, 0.0, nan, 0.31308822235904654, 0.5407402476367994, 0.5353727961089925, 0.0, 0.9583768797437656, 0.8894811289823983, 0.976516152184038, 0.010158989218608448, 0.379761952685748, 0.458744875997832, 0.0] |
| 0.1 | 30.0 | 1500 | 0.6125 | 0.4071 | 0.4817 | 0.8777 | [nan, 0.7976347312880722, 0.8842065126488408, 0.6220522211975981, 0.5992989007197456, 0.4754131699628208, 0.0, 0.5620206554196702, 0.7103054176260091, 0.3001608040201005, 0.8696414262339918, 0.2710134279595442, 0.0, 0.10693402202514375, 0.19945219123505975, 0.6229581109493774, 0.0, 0.21330049261083744, 0.7784639440974739, 0.1842071699891868, 0.4662422580117327, 0.5517361225824782, 0.001549819657348963, nan, 0.17199259716224552, 0.43358794468966694, 0.4268464617063853, 0.0, 0.8891718707035294, 0.8054920070330026, 0.9535609872146814, 0.01007383935063937, 0.16253665133576994, 0.3658318614584579, 0.0] | [nan, 0.8697241860632949, 0.9524319715036934, 0.8257718568242948, 0.7468530628299254, 0.5881267793852769, 0.0, 0.7164141181490659, 0.8437754352203041, 0.3683613310639138, 0.9597225061081064, 0.31468036446800013, 0.0, 0.10708012101102762, 0.20464997445068983, 0.7651242017023728, 0.0, 0.21330049261083744, 0.9302847679052355, 0.2516781574361694, 0.5984553495582629, 0.5925001383659759, 0.0015499506833873467, nan, 0.27588178379804734, 0.5562888715598076, 0.518736527938982, 0.0, 0.9534904946715259, 0.896063924459724, 0.9793106212730868, 0.011784150870325931, 0.3741529460703407, 0.47874361308587277, 0.0] |
| 0.0886 | 35.0 | 1750 | 0.6327 | 0.4115 | 0.4892 | 0.8822 | [nan, 0.8188467619727383, 0.8891141466002311, 0.6466411212625193, 0.6192450697021801, 0.4878651026475247, 0.0, 0.5804609572704323, 0.6873373994573425, 0.24242875689020368, 0.8707606811583432, 0.23605331403413546, 0.0, 0.13050222997866978, 0.2175902389425521, 0.6145514015738078, 0.0, 0.21711822660098523, 0.7803908730722577, 0.17679611946673174, 0.4549480658658346, 0.5467616324171395, 0.03352848701685911, nan, 0.19210202055245182, 0.44554925412112634, 0.43457869634340224, 0.06254767353165523, 0.8901464405497997, 0.8074494955970959, 0.9551576666105007, 0.009091384084852917, 0.16846681832699967, 0.3645371672657186, 0.0] | [nan, 0.9119479474792671, 0.9590241346815159, 0.845415986574404, 0.6953594791245139, 0.6061356109464877, 0.0, 0.7276180593606199, 0.892360619111798, 0.28098867756974766, 0.9616378091517278, 0.2841688750136131, 0.0, 0.1313555186883966, 0.21870209504343383, 0.7725733241957431, 0.0, 0.21711822660098523, 0.9161171536509721, 0.21953178271081142, 0.5994171169644333, 0.6121438495259369, 0.034859799915457235, nan, 0.3531093323951095, 0.6003076440268559, 0.6043221135245676, 0.06269113149847094, 0.9560401237295135, 0.884834427780536, 0.9792357012514029, 0.010829800623785451, 0.34680568415120167, 0.46030641401411304, 0.0] |
| 0.0795 | 40.0 | 2000 | 0.6240 | 0.4282 | 0.5017 | 0.8835 | [nan, 0.8168639361241289, 0.8876591799999074, 0.6570827724213207, 0.6202745367509233, 0.48734716072991435, 0.0, 0.5833200793037147, 0.7249773695346732, 0.31780959887896304, 0.8734250949568915, 0.2279957413675295, 0.0, 0.19478847928505513, 0.2588001983143282, 0.6266940289672047, 0.0, 0.3150246305418719, 0.7870743183835168, 0.18024107181885737, 0.48180217328687497, 0.5880553963585522, 0.042404523149135905, nan, 0.17844859516376527, 0.45068592007174485, 0.44004742517113327, 0.19134396355353075, 0.892022331516544, 0.8143712718909341, 0.9551036492731949, 0.016888403579096854, 0.15958069694966476, 0.36017381107545093, 0.0] | [nan, 0.8991722677575189, 0.9610711923215693, 0.8649585814233277, 0.7118098889111815, 0.594659810586253, 0.0, 0.7184642464033051, 0.8603538440753031, 0.3580502725770246, 0.9623214298952487, 0.26042763277307873, 0.0, 0.1956914218795745, 0.26673479816044965, 0.788603835801476, 0.0, 0.3150246305418719, 0.9230146256606502, 0.2499204485188272, 0.6278490409879275, 0.6625473814771242, 0.04480766521065239, nan, 0.29663998592664265, 0.6117266104950834, 0.5436080252031172, 0.1926605504587156, 0.9509771523653007, 0.887874399303051, 0.9819309132416605, 0.02064301076756039, 0.34012318344672116, 0.46386756263254, 0.0] |
| 0.0754 | 45.0 | 2250 | 0.6471 | 0.4302 | 0.5116 | 0.8840 | [nan, 0.8281984303346407, 0.8897375767546668, 0.6335678497580041, 0.6461049225195123, 0.4896718508137295, 0.0, 0.5769963172973805, 0.7160045601555046, 0.31492773499314275, 0.8789298786291031, 0.41197707824430413, 0.0, 0.19778300628229073, 0.19288119288119288, 0.6158351667955045, 0.0, 0.26785714285714285, 0.7860686941589031, 0.17655380387956127, 0.40860437517167547, 0.5549189258475934, 0.060120717954148355, nan, 0.1768935762224353, 0.45137771772158236, 0.44662611174687306, 0.24400299850074963, 0.8917308479385957, 0.8178316117483762, 0.9546822647246874, 0.0181622066651208, 0.17782411648425822, 0.3692233084050129, 0.0] | [nan, 0.9127907293988842, 0.9579732772469148, 0.8546614098408393, 0.7189306666878257, 0.617758410318982, 0.0, 0.7117038660531152, 0.8630197023070054, 0.3681886578356644, 0.9609314187010253, 0.48673176752459435, 0.0, 0.19893627403142383, 0.2049054675523761, 0.8228995957609527, 0.0, 0.26785714285714285, 0.9313026975574736, 0.22976740662171377, 0.482567055983181, 0.7372479787923986, 0.06399887276313936, nan, 0.30833846424487643, 0.6003932327823953, 0.6147681423755044, 0.24885321100917432, 0.947219534571164, 0.890392783205778, 0.9840024279813396, 0.0241042593066438, 0.3959314574024127, 0.47575603698227187, 0.0] |
| 0.0746 | 50.0 | 2500 | 0.6936 | 0.4117 | 0.4867 | 0.8749 | [nan, 0.7957936899551392, 0.8814366206724774, 0.5436114176098814, 0.6151632247714599, 0.4361122655202057, 0.0, 0.5671206613898421, 0.7141211613500584, 0.3419340943355589, 0.870823541579283, 0.1755482015278508, 0.0, 0.14690036810414178, 0.3004324599338591, 0.6098619199234538, 0.0, 0.16824615384615385, 0.7756330550603614, 0.17781881780267358, 0.4502871856554716, 0.45687245610992666, 0.06802365130029826, nan, 0.19639260088210125, 0.4534812252031405, 0.42577189666036547, 0.27414561664190196, 0.8856918914231561, 0.8034178358523514, 0.9553431034562543, 0.003146721773436032, 0.12501083138368427, 0.36698838817524204, 0.0] | [nan, 0.9180616367888037, 0.9472426408781908, 0.8013172716614175, 0.7302314913997886, 0.524968334204869, 0.0, 0.7028351702702309, 0.8539511709675187, 0.39162288166950343, 0.9630569443900208, 0.20604784550041746, 0.0, 0.14701863960183467, 0.30173735309146654, 0.7343444275597668, 0.0, 0.16834975369458127, 0.9160513108532854, 0.22030456852791877, 0.6251207408000449, 0.5084687072928094, 0.0732703959419473, nan, 0.3280191749494239, 0.6051695608345855, 0.5601890233792074, 0.28211009174311924, 0.9506023739291599, 0.8749006566683216, 0.9851772884487643, 0.003475079702076749, 0.25166727050709176, 0.48520419707741125, 0.0] |
| 0.067 | 55.0 | 2750 | 0.6778 | 0.4277 | 0.5121 | 0.8816 | [nan, 0.8194645919335458, 0.8855287302003849, 0.6053350056000855, 0.654773528870809, 0.4697667824136534, 0.0, 0.5710052174119353, 0.7267313389676074, 0.3551371282700238, 0.8755668722529796, 0.19964417520695182, 0.0, 0.13208006623484148, 0.3486218302094818, 0.6180969846096706, 0.0, 0.20360432519022828, 0.7807972584967618, 0.18003748362164762, 0.4432680689367132, 0.45560830868332836, 0.08040790777737207, nan, 0.1822721323375752, 0.45364137665335047, 0.45602216206006424, 0.36234396671289876, 0.8940119674114063, 0.8166972645181432, 0.9573128637395036, 0.03034622884202592, 0.18678678678678678, 0.3686953575810984, 0.0] | [nan, 0.9128102675762187, 0.9552788883754972, 0.8469619991264167, 0.7317413411289339, 0.5563598861193423, 0.0, 0.746876075856685, 0.8778384470140081, 0.40151459088778707, 0.9621413903500922, 0.24645151922169384, 0.0, 0.13233141407241145, 0.4039345937659683, 0.7933911385238819, 0.0, 0.31305418719211825, 0.9172193620842494, 0.246723236608834, 0.6190652452610861, 0.49203224849677785, 0.08688178103423982, nan, 0.36441199753716247, 0.5990761169332598, 0.6004808489471066, 0.39946483180428133, 0.9529255570362644, 0.9017875242386488, 0.9826782046681377, 0.03842297079549934, 0.3380298699730285, 0.48104842741150405, 0.0] |
| 0.0687 | 60.0 | 3000 | 0.6879 | 0.4291 | 0.5100 | 0.8823 | [nan, 0.8196948326057852, 0.8831657666830767, 0.6467890499563872, 0.6516417841503617, 0.4681981224281317, 0.0, 0.5880231738461575, 0.724187852815783, 0.2984189272432753, 0.8789400109991544, 0.2520251481078467, 0.013058335367341958, 0.10452562571588721, 0.3387726959319697, 0.616015263989506, 0.0, 0.29198813056379824, 0.784720416853429, 0.1792090810910177, 0.44576935641947074, 0.48202529113784476, 0.09516336506303061, nan, 0.18832282614869086, 0.45650264775637484, 0.4556005895357406, 0.2745886654478976, 0.8952007693743541, 0.8138636450290234, 0.9572388978933325, 0.010588595444700982, 0.1924104882672224, 0.35264721130282095, 0.0] | [nan, 0.9097946995213146, 0.9621478252989295, 0.8651175889100899, 0.7142492257108215, 0.566678868165708, 0.0, 0.7143145307931398, 0.867036613536233, 0.32265226078590986, 0.9605385088701248, 0.3026826877699931, 0.020373191165270373, 0.10464038255099053, 0.3765968318855391, 0.820913163096054, 0.0, 0.30295566502463056, 0.9221738131341896, 0.23635123873020683, 0.6270817065600855, 0.5335377838453821, 0.1042412286881781, nan, 0.364565924883455, 0.6191561899689979, 0.615072127342066, 0.2870795107033639, 0.9511700815454721, 0.8878140719993255, 0.9816183488199858, 0.012797283559588108, 0.43857601009084446, 0.4393736482895436, 0.0] |
| 0.0629 | 65.0 | 3250 | 0.6960 | 0.4222 | 0.4985 | 0.8831 | [nan, 0.8385937711298211, 0.8847888472425782, 0.6149328797554199, 0.6525561252288588, 0.48169461209819614, 0.0, 0.5971548536892575, 0.7135824408049566, 0.27369317672375143, 0.8823153606699299, 0.36233237512950345, 0.0, 0.14186935456382538, 0.24867603039373704, 0.6160654277501113, 0.0, 0.08275862068965517, 0.7805731162375585, 0.17752144045477705, 0.44269702931252913, 0.48059292296084216, 0.08923905090414969, nan, 0.17907819011708673, 0.4873286783174559, 0.4527498618417013, 0.22163588390501318, 0.8943575787945166, 0.8201429759960558, 0.9581956395009911, 0.019349515805194163, 0.1776665799886147, 0.3600628431614535, 0.0] | [nan, 0.9267115927398949, 0.9487909172436692, 0.8034188525406715, 0.7109193193887602, 0.6603282784265877, 0.0, 0.725064612012743, 0.8812655082760235, 0.3011914452749204, 0.9638660632870772, 0.45070243583693326, 0.0, 0.14216356006636088, 0.27593254982115484, 0.7556489178908058, 0.0, 0.08275862068965517, 0.9325111323710189, 0.2190847791499356, 0.625637495828009, 0.527886775476724, 0.09623784697759617, nan, 0.31748614653883367, 0.6121141373604427, 0.6112861327585254, 0.22477064220183487, 0.9484829229283243, 0.8975837228691066, 0.9804554182923197, 0.025625687235911233, 0.3392241321471224, 0.46249546141014647, 0.0] |
| 0.0629 | 70.0 | 3500 | 0.7101 | 0.4217 | 0.4989 | 0.8789 | [nan, 0.786640982710835, 0.8880498247990368, 0.6213814597589751, 0.6071277471550605, 0.4592909171926279, 0.0, 0.5867507688789444, 0.7333167906428527, 0.3791430524621254, 0.8814043667546686, 0.28100956352915796, 0.0, 0.0984556925025576, 0.3509064388414253, 0.6349712777519019, 0.0, 0.0, 0.7875471953847744, 0.1780650489932298, 0.4450250049891278, 0.4999114269705531, 0.08133596346637123, nan, 0.18414185986147352, 0.4677542129328365, 0.45241313162139773, 0.28850405305821664, 0.8909480603280158, 0.8200828649597152, 0.9579545152813692, 0.007145844060159359, 0.17539286131557424, 0.37038789587688453, 0.0] | [nan, 0.8598412127047438, 0.9543510233299178, 0.8082120661777665, 0.7612604902628672, 0.5904985183894021, 0.0, 0.7265619620716575, 0.8569696210790629, 0.424110116184415, 0.9631369031291932, 0.35466656986241696, 0.0, 0.09861422855469894, 0.43025038323965253, 0.8124211634536118, 0.0, 0.0, 0.9232874343190659, 0.2156072429729525, 0.6237066827758375, 0.5406547619892345, 0.08482457376356208, nan, 0.2963981000967543, 0.6442741122544078, 0.57353672691096, 0.2993119266055046, 0.9601885858498842, 0.8969635881631085, 0.9813282126850572, 0.008685970359817705, 0.3223031815681065, 0.45102269558033437, 0.0] |
| 0.056 | 75.0 | 3750 | 0.6888 | 0.4319 | 0.5074 | 0.8864 | [nan, 0.846983759179929, 0.8871265021170364, 0.6327919532904038, 0.6690289787883766, 0.4809385638926465, 0.0, 0.5929931910773564, 0.7319858245513943, 0.3873577190849818, 0.8821459096044979, 0.31863963925997724, 0.0, 0.23505840639191783, 0.3168200047180939, 0.6339963432877168, 0.0, 0.0, 0.7891815340906951, 0.16853589090364154, 0.44962094152977145, 0.5116482092488317, 0.10324211857041271, nan, 0.19139417066912298, 0.46438574150773454, 0.4679743443307121, 0.26584176977877766, 0.893033114012553, 0.8167232339927487, 0.958758389465055, 0.00683255888015518, 0.17629150606516764, 0.37230474365117394, 0.0] | [nan, 0.9383299434889024, 0.9547491546521122, 0.8273944994904098, 0.7246575916990003, 0.6112474580210331, 0.0, 0.7317551677487866, 0.879380624581915, 0.4323244283282765, 0.9640948194150409, 0.41171452426761534, 0.0, 0.23614228554698935, 0.34312723556463975, 0.7955762144552705, 0.0, 0.0, 0.9198500013278298, 0.21301613758618076, 0.6485306793405083, 0.5580917132262262, 0.11046921234324363, nan, 0.3198390359750198, 0.6043089183483272, 0.5887636102360029, 0.27102446483180426, 0.9532878705576775, 0.8931910708096411, 0.9816932688416696, 0.008160386166070774, 0.350442145377937, 0.47428658629635284, 0.0] |
| 0.0545 | 80.0 | 4000 | 0.7242 | 0.4313 | 0.5097 | 0.8839 | [nan, 0.8315018755718794, 0.8905184158955881, 0.5801625429382188, 0.6532970384376523, 0.4694179481073208, 0.0, 0.5983799840636467, 0.7235855215136249, 0.3640520350334879, 0.8784869607735561, 0.3143670199951819, 0.0, 0.2781527188584651, 0.3326551373346897, 0.6281559683282705, 0.0, 0.08645320197044334, 0.7821189057727206, 0.19111444811384393, 0.4452253857934852, 0.4994405348435919, 0.10157298545122671, nan, 0.17629709076283684, 0.46700401281623927, 0.4615519817207136, 0.2734785875281743, 0.8899163053914229, 0.8095455355998507, 0.9581430685733312, 0.005790762673569464, 0.17969789570113207, 0.36411010043900494, 0.0] | [nan, 0.9123876444791007, 0.9612296601404773, 0.8930262764661256, 0.7126551176008956, 0.5714955551682064, 0.0, 0.7523059093928652, 0.865652608026573, 0.41833789684007994, 0.9665973690927172, 0.37897048680437073, 0.0, 0.2815458182882795, 0.41773122125702605, 0.7517058490509818, 0.0, 0.08645320197044334, 0.9215057282136607, 0.24684445791347828, 0.6202401611194349, 0.5561602661167979, 0.10663660701704945, nan, 0.2891635148210045, 0.6095369648325313, 0.5805560161388382, 0.2782874617737003, 0.9568961863731891, 0.870963644368671, 0.9845664755331252, 0.007178373593543613, 0.36061350187190533, 0.46088130206223, 0.0] |
| 0.05 | 85.0 | 4250 | 0.7236 | 0.4310 | 0.5096 | 0.8865 | [nan, 0.8344804679717858, 0.891480804753714, 0.6039392215856049, 0.6561901191296589, 0.5040396418009069, 0.0, 0.5972644983662688, 0.7352912849624103, 0.4166594809002328, 0.882374306124748, 0.291759692976696, 0.0, 0.11696789594193015, 0.4100259636508888, 0.6420473687097001, 0.0, 0.0, 0.7822126517859589, 0.18499892874997023, 0.45949977357159744, 0.5246592278602004, 0.10855595092676192, nan, 0.18756695799266987, 0.4678528011435098, 0.4557543571262987, 0.2325056433408578, 0.8913224348625648, 0.8136362687377343, 0.9598605495290813, 0.008994566889922168, 0.1923180020267399, 0.3698758474475382, 0.0] | [nan, 0.9238238149259353, 0.9605341564359651, 0.8564066606895178, 0.714878329764632, 0.6240479925628958, 0.0, 0.7253836717079392, 0.8553615384868866, 0.47677545080046374, 0.96226053416674, 0.36290703161868804, 0.0, 0.11715624085098078, 0.5245273377618804, 0.8139308522789349, 0.0, 0.0, 0.9272880427065164, 0.23551026592923707, 0.5960753651336961, 0.5733261619548913, 0.11520360715795407, nan, 0.29259389568123845, 0.634985354812941, 0.5976344442602112, 0.23623853211009174, 0.9580478059949592, 0.8761671553428071, 0.9800020805814939, 0.011116797255897263, 0.39472377655220536, 0.470034782700211, 0.0] |
| 0.0483 | 90.0 | 4500 | 0.7448 | 0.4348 | 0.5119 | 0.8858 | [nan, 0.8389020217362697, 0.8904583684155554, 0.6053893552299984, 0.6609445788027536, 0.48826307798392343, 0.0, 0.5990805851530085, 0.741553407283815, 0.3904125924159313, 0.8810578364409596, 0.24072208997131173, 0.007595345830639948, 0.11408382066276804, 0.3854978354978355, 0.6358003169572107, 0.0, 0.3205665024630542, 0.7799325512458637, 0.18157179971658008, 0.44179222083868513, 0.4810432700260739, 0.10200241902970031, nan, 0.17958766620104505, 0.47953821940837715, 0.46267085062022195, 0.20652173913043478, 0.8936165310088457, 0.8196186094828226, 0.9601551959806593, 0.007783159441927215, 0.17946660884648744, 0.3712830781592127, 0.0] | [nan, 0.9268645537858738, 0.9579552943101062, 0.8624259561522487, 0.7130170885820071, 0.6134222299692057, 0.0, 0.7456444472460493, 0.8743388902252963, 0.44418954586940973, 0.9629775151789223, 0.28632881983519076, 0.00894897182025895, 0.11422855469893628, 0.45503321410321923, 0.7969369208307261, 0.0, 0.3205665024630542, 0.9184567677287768, 0.2329797711947875, 0.6319321335328264, 0.5199750799329599, 0.10694659715372692, nan, 0.30532588618172224, 0.6374674287235863, 0.6071132482175426, 0.2106269113149847, 0.9560636685684433, 0.8940191660968048, 0.9818139998320264, 0.009301457113021348, 0.3331991465721992, 0.46443061088103893, 0.0] |
| 0.0488 | 95.0 | 4750 | 0.7572 | 0.4392 | 0.5164 | 0.8870 | [nan, 0.8412265993316759, 0.8902791647105773, 0.6166091899398941, 0.6573127590169391, 0.49795139519110443, 0.0, 0.6045930992650757, 0.740872213808363, 0.3893914038172305, 0.8838233368096821, 0.33872329970362863, 0.004128819157720892, 0.1232210193407128, 0.36835222319093286, 0.6420211202135859, 0.0, 0.2602216748768473, 0.7833929304386752, 0.17934607063412256, 0.4671484042901698, 0.5449281805918343, 0.09757754723390911, nan, 0.1862480907024973, 0.4739074459454693, 0.46393408427200666, 0.20655861289106672, 0.8908646555131348, 0.8077701092850268, 0.959734031170495, 0.015509419333207602, 0.19220623899538222, 0.36528917777672343, 0.0] | [nan, 0.9329796512523355, 0.9594185059351048, 0.832704966397695, 0.7156041609282175, 0.6057294753355412, 0.0, 0.740442513492152, 0.8672541001163223, 0.4534398973827672, 0.964824509100999, 0.4003702762551276, 0.00476009139375476, 0.1235727530008783, 0.4317833418497701, 0.8025088644557671, 0.0, 0.2602216748768473, 0.9244890653768502, 0.22295628456701266, 0.6153075940114643, 0.6122502848919965, 0.10522756094124278, nan, 0.32980033424223765, 0.6388606234665348, 0.6146299673907036, 0.20948012232415902, 0.9577606974590687, 0.8682935054472558, 0.9823331908103197, 0.02047357902089197, 0.388175462608859, 0.4557849260933397, 0.0] |
| 0.0466 | 100.0 | 5000 | 0.7516 | 0.4340 | 0.5089 | 0.8868 | [nan, 0.8369914869418346, 0.8917253025027853, 0.63431934846412, 0.6595590976640465, 0.490185886416082, 0.0, 0.6019878455204862, 0.7389529158865543, 0.34824032232931906, 0.8841782288939659, 0.3149823779040495, 0.0, 0.1793690267212795, 0.3540386803185438, 0.6423088361774469, 0.0, 0.145935960591133, 0.7781632167836338, 0.18123317726357693, 0.45431638450718936, 0.5090139572607015, 0.10249373268241192, nan, 0.1875506294119916, 0.501633275054173, 0.45008636966215404, 0.17736422331940752, 0.8917030821290204, 0.8118398661365593, 0.9594706627009374, 0.014780075321537696, 0.20062550586608202, 0.37857391883524044, 0.0] | [nan, 0.9373271597386813, 0.9596797489625617, 0.8314003387051043, 0.7185675621858967, 0.5884759746673639, 0.0, 0.7444904015400207, 0.8778911710334237, 0.3858999975332396, 0.9637834569075349, 0.3974298471702908, 0.0, 0.17949155850492826, 0.397547266223812, 0.7936692390969677, 0.0, 0.145935960591133, 0.9165776142827953, 0.24282142586559588, 0.6377640831341348, 0.5628898195281933, 0.10945469916866281, nan, 0.3207406104318761, 0.6268758202255739, 0.6192450118830487, 0.17851681957186544, 0.9569449380396788, 0.8769881312587235, 0.9830475556030632, 0.01869973236699608, 0.34259221985158944, 0.47854628309223995, 0.0] |
| 0.0681 | 105.0 | 5250 | 0.7608 | 0.4243 | 0.4961 | 0.8801 | [nan, 0.8053305712022708, 0.8888831373349202, 0.6063781727951514, 0.6458484552441548, 0.4450952774354321, 0.0, 0.5835976764940738, 0.7449298281412959, 0.38801677910396126, 0.8805089961159074, 0.14255831144524309, 0.0, 0.1778948138395143, 0.3797164667393675, 0.6438507708603036, 0.0, 0.2848522167487685, 0.7757003332539172, 0.14560873446405273, 0.46351390150988186, 0.47026329896747027, 0.08670882625524723, nan, 0.16717484516436398, 0.49040240585388206, 0.4269185360094451, 0.09782193351165457, 0.8929769955183823, 0.8046204535691968, 0.9590862138793831, 0.04553666153467317, 0.1919049851539303, 0.36759942734721646, 0.0] | [nan, 0.8461725854729251, 0.9657024524747764, 0.8717211889928504, 0.7386199232908679, 0.5728516646330835, 0.0, 0.7229524174348182, 0.8661468957085944, 0.44266015441920126, 0.9636971438314745, 0.16451882237630233, 0.0, 0.17800331804430566, 0.44481349003576903, 0.8150531867346027, 0.0, 0.2848522167487685, 0.9260951906884237, 0.2249185544359421, 0.6512735360080518, 0.5153941017777545, 0.0896435113428209, nan, 0.23148473920309615, 0.6005358807082946, 0.49964074503951805, 0.09785932721712538, 0.9555801683760682, 0.8920875682663394, 0.9854006169210447, 0.0684193055373061, 0.28012828254364425, 0.47628225029862603, 0.0] |
| 0.0668 | 110.0 | 5500 | 0.7138 | 0.4340 | 0.5140 | 0.8742 | [nan, 0.7871483106350147, 0.8799748398030087, 0.6039422540580079, 0.58793837643889, 0.4164255041075429, 0.0, 0.6184209066896527, 0.7402801021253262, 0.3308593247243554, 0.8857427628712552, 0.35066959646049234, 0.0, 0.16199673226522301, 0.42935960591133004, 0.6284724323670036, 0.0, 0.3552955665024631, 0.7640465559057021, 0.1673140841039061, 0.4603793394796352, 0.4502083383450174, 0.08286035553651745, nan, 0.19144741314841254, 0.494703324736749, 0.49196363166286033, 0.21928518242740133, 0.8942953842754613, 0.8018772529737324, 0.9608524553067362, 0.025030461104976583, 0.16785196891874093, 0.3735661360500572, 0.0] | [nan, 0.8648334810431274, 0.9433503159465763, 0.7861368460577638, 0.8401580732564278, 0.456157108825751, 0.0, 0.7569977355489718, 0.8541785433012485, 0.38047312464540317, 0.9656267441330937, 0.428703670091117, 0.0, 0.1620718259002635, 0.5567194685743485, 0.8251045360189903, 0.0, 0.3552955665024631, 0.9128087725432023, 0.21700886430790212, 0.6164003697345833, 0.5046228427325222, 0.08721995209243343, nan, 0.3096138622570147, 0.6316283736234475, 0.6310175205880727, 0.22515290519877676, 0.9574614010065557, 0.8952916600312878, 0.9807011750513465, 0.036369043090988304, 0.3078378487178455, 0.47336308192615123, 0.0] |
| 0.0456 | 115.0 | 5750 | 0.7481 | 0.4396 | 0.5149 | 0.8874 | [nan, 0.8535949387776991, 0.889196790918221, 0.6590754161484988, 0.6643237184774637, 0.46255227979529023, 0.0, 0.6160656034941906, 0.7414819627132849, 0.33609977221984166, 0.881638905287202, 0.26364535016348567, 0.0, 0.11007294284111147, 0.47720425788310905, 0.6368556033975671, 0.0, 0.32869458128078816, 0.7703600738384895, 0.17442321190028753, 0.46530941552214283, 0.48260002610416075, 0.09418922868453915, nan, 0.20518864654252, 0.4743353551385976, 0.4722508031833358, 0.20610399397136397, 0.8954748076190832, 0.8187194150221221, 0.9605552926063987, 0.012601025462761798, 0.17920223292081403, 0.3762309075548745, 0.0] | [nan, 0.9413675139957597, 0.9627770101122414, 0.853864456654176, 0.7242582145309057, 0.5528162221834872, 0.0, 0.7381053284908671, 0.8687863919305888, 0.3676213029428452, 0.9679646105797591, 0.3146622136711802, 0.0, 0.11008099931687323, 0.6070516096065406, 0.8065015941122136, 0.0, 0.32869458128078816, 0.912257229374579, 0.23346465641336464, 0.6230433232027166, 0.5299729086514923, 0.09990136677469354, nan, 0.3254024100624505, 0.6366091637027598, 0.621511081633781, 0.2090978593272171, 0.9563050724169996, 0.8984035746737735, 0.9820063104609347, 0.01504138975525757, 0.32565785059646013, 0.47864626362234725, 0.0] |
| 0.0432 | 120.0 | 6000 | 0.7519 | 0.4416 | 0.5185 | 0.8876 | [nan, 0.8517831570119985, 0.8901004311397058, 0.6339355013970817, 0.6606286462755991, 0.4746063751504886, 0.0, 0.6132450026307165, 0.7426311341925447, 0.3602046617396248, 0.8859214231639748, 0.3273784162152292, 0.0, 0.15872087354977088, 0.4255713403335392, 0.6326264779996124, 0.0, 0.35557744397931546, 0.7741301715457662, 0.17043647800201933, 0.46161159879531216, 0.5113488607281433, 0.11327498751609766, nan, 0.19760381654559253, 0.47813157752711966, 0.46921250159026334, 0.1416030534351145, 0.8955479192568264, 0.8197854779969181, 0.9604275470620833, 0.010892456172159384, 0.18561124493594658, 0.3689976212003217, 0.0] | [nan, 0.9296893165774394, 0.9616835385667785, 0.87624044997203, 0.7260692029572803, 0.5797304049735634, 0.0, 0.7494101274784102, 0.8745695578102397, 0.39073484792422114, 0.9642129041755406, 0.3904962427850583, 0.0, 0.15887576851761492, 0.528104241185488, 0.8103950021354152, 0.0, 0.3556650246305419, 0.9162409381106233, 0.22253201000075765, 0.6204044413898943, 0.5625662560153721, 0.12145977173453572, nan, 0.3206086727064825, 0.6318803849592027, 0.6115348477311667, 0.14181957186544342, 0.9576213674122256, 0.8924536538299407, 0.9825164346850114, 0.013502672872248463, 0.3639547522241456, 0.4569004983240106, 0.0] |
| 0.0446 | 125.0 | 6250 | 0.7468 | 0.4334 | 0.5064 | 0.8877 | [nan, 0.8499567507325978, 0.8871076417101389, 0.6330569753090723, 0.6639770881242221, 0.4871746836767682, 0.0, 0.5980424732505424, 0.7360705192073508, 0.30519138810716817, 0.8812845049064242, 0.23256457139345144, 0.0, 0.13761825807080855, 0.4344916900496439, 0.6344221105527639, 0.0, 0.31022167487684726, 0.7799696347321634, 0.17147761834567948, 0.4735415094048958, 0.5082152629506022, 0.10032137118371719, nan, 0.19083052625766195, 0.477693792160024, 0.4774453072902102, 0.10550458715596331, 0.8982375671163275, 0.8273146135730871, 0.9607895023001171, 0.016035198543508544, 0.15227804315598747, 0.37272481048329426, 0.0] | [nan, 0.9294944628629415, 0.9603275161439091, 0.8696425971478271, 0.7134799429158917, 0.6058991342745919, 0.0, 0.7261197395153978, 0.8763951269825055, 0.32904117023113544, 0.9650643853185165, 0.2747304606672233, 0.0, 0.13769883868449304, 0.5143076136944302, 0.7674085992670063, 0.0, 0.31022167487684726, 0.9269199814674473, 0.20887946056519432, 0.6072557812618596, 0.5566839281178112, 0.10556573199943638, nan, 0.3039625296859882, 0.6508858436198338, 0.6133587575305367, 0.10550458715596331, 0.9551001306600062, 0.9014341786025424, 0.9824066792392325, 0.020100137620071783, 0.3101324423332394, 0.48336771260333516, 0.0] |
| 0.0401 | 130.0 | 6500 | 0.7766 | 0.4379 | 0.5140 | 0.8867 | [nan, 0.8468760227965516, 0.8886795707269431, 0.622437352951649, 0.6682970140214559, 0.4786959592750148, 0.0, 0.6085294389146897, 0.7427519649223919, 0.3908760790623845, 0.8822040839218181, 0.20753357844976364, 0.0, 0.17475089531512655, 0.47288964490750585, 0.6415406446381512, 0.0, 0.2750554050726422, 0.778568992850166, 0.17143968092188597, 0.46392364840506783, 0.4823894964669603, 0.09554546178978404, nan, 0.20017632982073136, 0.47654683547891147, 0.4713058003824428, 0.1655881233346022, 0.8956585893822123, 0.8232044008477167, 0.9608808597268595, 0.012288627559172788, 0.18044196123782585, 0.37141827889613904, 0.0] | [nan, 0.9354963797165556, 0.9559979333791044, 0.8707192502509636, 0.7183888437369763, 0.6083772006275057, 0.0, 0.7415955894118731, 0.866331429776549, 0.4434248501443055, 0.9662129317110005, 0.2510073692235089, 0.0, 0.17500243973846002, 0.5682166581502299, 0.7858128979072931, 0.0, 0.2751231527093596, 0.9165205505248, 0.2204712478218047, 0.6489507377535817, 0.5256828538301831, 0.10136677469353247, nan, 0.3145395373383763, 0.6396451870589802, 0.5925772398165036, 0.16628440366972477, 0.9558582744735443, 0.8949720377326676, 0.9824620341597123, 0.014785513239880775, 0.3475571300135529, 0.47879491888421727, 0.0] |
| 0.0532 | 135.0 | 6750 | 0.8100 | 0.4370 | 0.5099 | 0.8867 | [nan, 0.8418463475820702, 0.8855647993577028, 0.6407052153749961, 0.6672622261373646, 0.48550215050970236, 0.0, 0.6013553074721314, 0.7358587165510544, 0.41406543029797876, 0.8806464817122883, 0.20844846800909883, 0.0, 0.10624649381692236, 0.46624287593160896, 0.6367459896871661, 0.0, 0.2729064039408867, 0.7800250020493483, 0.16987653185041204, 0.47226725829848964, 0.5354231045094412, 0.10532085561497326, nan, 0.19529110166632935, 0.4793455617996517, 0.4643273310907372, 0.1317799847211612, 0.8929265734089717, 0.8098728542013477, 0.9610867606622594, 0.009269971902267766, 0.1905821312686735, 0.3815049812671639, 0.0] | [nan, 0.9263081557808802, 0.9609817135875093, 0.8755450316865522, 0.7097842872099934, 0.608116901981291, 0.0, 0.7151553355218178, 0.871465431167145, 0.49016995979180544, 0.9649383576369068, 0.24783097978001234, 0.0, 0.10627500731921538, 0.5434338272866632, 0.7518349671742002, 0.0, 0.2729064039408867, 0.918908888272893, 0.2238048336995227, 0.6329937167995292, 0.5943152161418457, 0.11100464985205016, nan, 0.31827777289119535, 0.6406199478859578, 0.5836235008014149, 0.13188073394495411, 0.9580930951851359, 0.8802967653698794, 0.9799166622128225, 0.011248193304333996, 0.3184117654952162, 0.4786317927561475, 0.0] |
| 0.039 | 140.0 | 7000 | 0.7955 | 0.4374 | 0.5145 | 0.8873 | [nan, 0.8453406127060666, 0.8894584400292076, 0.618765500137779, 0.6661462422914772, 0.48188110711842147, 0.0, 0.608878748711235, 0.7435697628283624, 0.3796956629902977, 0.8857966705291055, 0.3616908539636749, 0.0, 0.12437204311564161, 0.5013698630136987, 0.6370300461309403, 0.0, 0.18285784554845055, 0.7737808450225561, 0.16547070030804295, 0.47332405936901073, 0.47251187823235086, 0.09493722374379694, nan, 0.19320955193290454, 0.47309349183647703, 0.4585451464536432, 0.13724742661075104, 0.8963119205284326, 0.8287376073022066, 0.9613351708673005, 0.00971653416847346, 0.18365372022293688, 0.38471762753712496, 0.0] | [nan, 0.9325931121764209, 0.9570000189093305, 0.8775718982045564, 0.7170735817481989, 0.6105484864330951, 0.0, 0.7451100949905688, 0.8584129411105655, 0.45178716791238066, 0.9654472341160111, 0.4460376810541983, 0.0, 0.12442666146189128, 0.5610628513030148, 0.7776685239812083, 0.0, 0.183128078817734, 0.9154620178139884, 0.2112205470111372, 0.6293701931124976, 0.5103334549061737, 0.09866140622798365, nan, 0.31508927786084967, 0.6503865758791867, 0.6281711159011772, 0.13761467889908258, 0.9610215191517875, 0.9003190602429954, 0.981970520641659, 0.01171499505535923, 0.3387008037786992, 0.48129837873677234, 0.0] |
| 0.0406 | 145.0 | 7250 | 0.8306 | 0.4360 | 0.5141 | 0.8867 | [nan, 0.8435997939171356, 0.886366406157634, 0.6223465646375345, 0.6631770897769883, 0.4788596814657396, 0.0, 0.6085666309373553, 0.7410466976722848, 0.31492224002889196, 0.8837966051190714, 0.22238290725881693, 0.0, 0.13819236298949727, 0.5232347616173808, 0.6307999909800885, 0.0, 0.3076828669612175, 0.7764942343062243, 0.16667183036627153, 0.4750608982109485, 0.4864866269041335, 0.08490179473871118, nan, 0.1946730634021258, 0.47966615140417673, 0.46086619157494946, 0.12857687905379625, 0.8998584935109988, 0.8307591913787293, 0.9614240003370637, 0.006127383872241452, 0.19595372863270513, 0.37590210909466404, 0.0] | [nan, 0.9378495859578592, 0.954765284801492, 0.8295152378981893, 0.7149554091802339, 0.6165097902504213, 0.0, 0.7444147582080288, 0.8516346093644449, 0.3441624115049705, 0.9674811514482063, 0.25957454532253965, 0.0, 0.13835756806870303, 0.6645375574859479, 0.8335170783548365, 0.0, 0.308743842364532, 0.922605741887015, 0.2037881657701341, 0.6393476201715377, 0.5315353798252473, 0.08758630407214316, nan, 0.31051543671387105, 0.6595398177910493, 0.623721881390593, 0.12882262996941896, 0.9541870064066892, 0.8996719468670082, 0.9800173509043849, 0.007351263130960367, 0.3720998886249883, 0.47889753048090633, 0.0] |
| 0.037 | 150.0 | 7500 | 0.8222 | 0.4343 | 0.5077 | 0.8875 | [nan, 0.844207269702504, 0.8878295221933561, 0.6214984234657922, 0.6643742580050236, 0.48557575036716316, 0.0, 0.6097768299571183, 0.7465852256395515, 0.3695119182746879, 0.884482746916304, 0.229786147654232, 0.0, 0.10648001365753726, 0.45458553791887124, 0.6442341311464989, 0.0, 0.258520979451212, 0.7755187494699113, 0.17377147325464898, 0.4744249539706051, 0.5001041209924736, 0.08993947946915624, nan, 0.19405005327880656, 0.4817597684924271, 0.45507234290956095, 0.1162079510703364, 0.898116706658797, 0.8266099378191127, 0.9613809600564381, 0.008963162954562462, 0.1934702763543734, 0.37436200278398785, 0.0] | [nan, 0.9305498764782347, 0.9581999167444519, 0.848117198096508, 0.7216662302611518, 0.6072343268839695, 0.0, 0.7464716749664212, 0.8558986644346832, 0.40151459088778707, 0.9658946853385326, 0.27772534214252004, 0.0, 0.10651898116521909, 0.5268267756770567, 0.8052302772066784, 0.0, 0.258743842364532, 0.9231421412121703, 0.21550875066292902, 0.6330883339173254, 0.5487097904926255, 0.09339157390446667, nan, 0.29633213123405755, 0.648311048557354, 0.6032167136461615, 0.1162079510703364, 0.9569935512071162, 0.8966612022369814, 0.9819461835645514, 0.01076756039031542, 0.35058975081518456, 0.467014318264338, 0.0] |
| 0.0359 | 155.0 | 7750 | 0.8264 | 0.4336 | 0.5100 | 0.8876 | [nan, 0.8425150819450634, 0.8887259579748503, 0.6062849127025877, 0.6661436167605636, 0.477463082002611, 0.0, 0.608982398838398, 0.74429892821273, 0.3660286553193368, 0.8814051326012079, 0.18797448685125717, 0.0, 0.206084945843982, 0.4612220916568743, 0.6472122569202, 0.0, 0.1635491016490278, 0.7777400139827546, 0.16735784151426214, 0.4777184910568181, 0.5271252583451728, 0.1026913327220754, nan, 0.20569207071077533, 0.49218430887769665, 0.4574078290930921, 0.0779816513761468, 0.8958569293152772, 0.8268185544245148, 0.961547775435119, 0.016675747796079745, 0.1920671902330555, 0.3826628162758937, 0.0007393715341959334] | [nan, 0.9373452897590907, 0.9575174915394369, 0.8346226350031035, 0.7189249990837373, 0.592023705769566, 0.0, 0.743514796514588, 0.8620311269429625, 0.40898887491057995, 0.9669574481830303, 0.230551421207391, 0.0, 0.2065726554113399, 0.6016862544711293, 0.8009495148138216, 0.0, 0.16366995073891627, 0.9155667078623104, 0.2259489355254186, 0.6286330113925248, 0.5697087786470788, 0.10731294913343667, nan, 0.3083164746239775, 0.6503009871236473, 0.6106228928314817, 0.0779816513761468, 0.9592060735712507, 0.8998802821519236, 0.9815085933742069, 0.02063609518606372, 0.36394133354803215, 0.48287175655267134, 0.0011527377521613833] |
| 0.0335 | 160.0 | 8000 | 0.8518 | 0.4340 | 0.5059 | 0.8886 | [nan, 0.8436530764368111, 0.8895900440620743, 0.6082310506714941, 0.6647265197368698, 0.48458344251575175, 0.0, 0.6090840245108227, 0.7404627804506331, 0.38335284631867284, 0.8815549567555062, 0.18294506042107886, 0.0, 0.07282879016921051, 0.4207551435677142, 0.6530114804312678, 0.0, 0.3558657849620377, 0.7775443898061408, 0.17116698280457718, 0.4806890482304907, 0.4933879226304321, 0.09181473293485085, nan, 0.17767671317351422, 0.4911045514027132, 0.4719998327724242, 0.08830275229357798, 0.9007817953005852, 0.8305455831626325, 0.9611232513095775, 0.006788911045474309, 0.20454109523352834, 0.3848491020278139, 0.0] | [nan, 0.9335459063558043, 0.9603316031750019, 0.8547916810348131, 0.7148900428130813, 0.5963366451687874, 0.0, 0.7503401525473862, 0.8677483877983438, 0.42808160043414983, 0.9664967587586591, 0.2132355610411297, 0.0, 0.0728749878013077, 0.47547266223811957, 0.7982678307162083, 0.0, 0.3578817733990148, 0.9317873005484486, 0.20920524282142586, 0.6235621577277751, 0.5465768257567909, 0.09480061998027335, nan, 0.25343038086023395, 0.6593662628145387, 0.6239982313601945, 0.08830275229357798, 0.9526813832066575, 0.902749388764508, 0.9824663289380254, 0.007828438254230607, 0.3434241777706212, 0.48473192062598336, 0.0] |
| 0.0346 | 165.0 | 8250 | 0.8438 | 0.4379 | 0.5103 | 0.8883 | [nan, 0.8459468636033894, 0.8888331369606564, 0.6143356921364396, 0.6654980544147341, 0.48167853831328056, 0.0, 0.6135617243950853, 0.7453493425593741, 0.36501505490612823, 0.8871093023776453, 0.28924392439243923, 0.0, 0.11610167426217922, 0.44053678852383155, 0.6419692508995748, 0.0, 0.31108930323846906, 0.7764850703242182, 0.17769648792669843, 0.48261405652354455, 0.5041534749331448, 0.09703109762704519, nan, 0.1935639159166168, 0.4981157384329542, 0.45534552215680196, 0.08371559633027523, 0.8969250693293208, 0.8249491172270096, 0.9618063555393217, 0.009535384030237478, 0.19902344047093898, 0.3833148309593847, 0.0] | [nan, 0.9345559069102661, 0.95845124190979, 0.8289156072553392, 0.7178118816407789, 0.6027575387833363, 0.0, 0.7548031091349021, 0.8646673279137435, 0.4066947877352673, 0.9652041807300498, 0.34996551348604205, 0.0, 0.11622914023616668, 0.4864588656106285, 0.7796748209727561, 0.0, 0.31231527093596056, 0.925766196175982, 0.22965376164860973, 0.6295864608103177, 0.5586281474711666, 0.10094406087079047, nan, 0.2917802797079778, 0.6533227456872777, 0.6091029679986735, 0.08371559633027523, 0.9575749702296287, 0.8960586786072262, 0.9801156536079956, 0.011096050511407251, 0.3511399165358346, 0.48239553350137077, 0.0] |
| 0.0359 | 170.0 | 8500 | 0.8588 | 0.4298 | 0.5008 | 0.8882 | [nan, 0.843094419260262, 0.8900013429866321, 0.6133301326394077, 0.6661149601220273, 0.4853624310010443, 0.0, 0.6120054866295084, 0.7375298943289792, 0.3408351470819216, 0.8829721413070726, 0.22209681464760472, 0.0, 0.03861163959217523, 0.4175319971021492, 0.6376489814784245, 0.0, 0.28027511667894867, 0.7789104093843366, 0.17390202354217138, 0.47461354628029206, 0.516023356843965, 0.08927792321116929, nan, 0.18421222487575034, 0.4871304688103021, 0.45871426798494186, 0.05387848681696599, 0.8994123394635088, 0.8242101331834862, 0.9615335975044262, 0.007916133605582808, 0.22646747269605874, 0.37908474344043297, 0.0] | [nan, 0.9397440850808414, 0.9577213526503497, 0.8272086714637118, 0.7156158739766668, 0.6048939631630934, 0.0, 0.7494343721360998, 0.8668388984634243, 0.3747009053010681, 0.9680392740381917, 0.25716048934548225, 0.0, 0.03862105982238704, 0.441747572815534, 0.7673092776337614, 0.0, 0.2810344827586207, 0.9210972833920102, 0.22352450943253277, 0.6446638544934293, 0.5682428088718845, 0.09226433704382134, nan, 0.31379189022781245, 0.6451870589801624, 0.6128613275852539, 0.05389908256880734, 0.9556559273578009, 0.9013431255913293, 0.9796174593236774, 0.009692187467583211, 0.33278316761268334, 0.4804472286975694, 0.0] |
| 0.0342 | 175.0 | 8750 | 0.8689 | 0.4339 | 0.5051 | 0.8880 | [nan, 0.842207631443645, 0.8893284445771101, 0.6225399576081035, 0.6646476520665043, 0.48347573182283166, 0.0, 0.6145921797450942, 0.7331767170008916, 0.3267635558167394, 0.8840148558277702, 0.2103112515380292, 0.0, 0.10012921471584953, 0.3746216530849825, 0.6392775627666964, 0.0, 0.4631879914224446, 0.7770691785507862, 0.1792685215596115, 0.48551142385802487, 0.48582005237755577, 0.08915524176996963, nan, 0.18459143368114972, 0.48183353534471146, 0.4823333617820261, 0.029434250764525993, 0.897290929740743, 0.8192668128466759, 0.9613327742988569, 0.0055269961977186316, 0.2091533037018423, 0.3819620509014621, 0.0] | [nan, 0.9364984593883142, 0.9624452521749953, 0.8451305393993732, 0.7156570585663757, 0.5934239730404973, 0.0, 0.7383128627606906, 0.8535853980828229, 0.35661955154295866, 0.965347682838101, 0.2512977819726286, 0.0, 0.10020005855372303, 0.41108840061318347, 0.7677959536366616, 0.0, 0.47881773399014776, 0.9253472165067016, 0.23157814985983788, 0.6453594462715136, 0.5296791470411678, 0.09181344229956319, nan, 0.31548509103703054, 0.6294458603571904, 0.6247167412811585, 0.029434250764525993, 0.9557927644216986, 0.8822924375415687, 0.9825359997862155, 0.006282805789724829, 0.3573930196046858, 0.48477927982445523, 0.0] |
| 0.0621 | 180.0 | 9000 | 0.7787 | 0.4015 | 0.4924 | 0.8783 | [0.0, 0.8086755425666048, 0.8830559170088975, 0.5349712025714258, 0.645925544331418, 0.4397485010784333, 0.0, 0.6035436142733216, 0.7401548966695519, 0.27901830172394715, 0.8781545312615516, 0.15653466918823716, 0.0007045974986788797, 0.12723599990265033, 0.20456217807211186, 0.629064116632701, 0.0, 0.28005299927728255, 0.7801685900058292, 0.18456300860811892, 0.45049561474148564, 0.5454936336497989, 0.09604580812445981, nan, 0.13710408411674824, 0.4796006742513984, 0.4462842458656277, 0.08326967150496563, 0.895986048178371, 0.8195021626448673, 0.9584500399303424, 0.012936392680801627, 0.2073265351363334, 0.33898081262786167, 0.001953125] | [nan, 0.9020274819425876, 0.9445349555322843, 0.7582243269960229, 0.7115816733865559, 0.6725024693509964, 0.0, 0.7246456643278654, 0.8622486135230519, 0.3110091516810972, 0.9623436700743563, 0.19680908991904744, 0.0007616146230007616, 0.12754952669073874, 0.21308124680633622, 0.7971156997705671, 0.0, 0.28633004926108374, 0.9141247505929693, 0.2378134707174786, 0.6194613894575736, 0.6931652884469377, 0.10021135691137101, nan, 0.23282610607793122, 0.6372106624569679, 0.5951196595368374, 0.08333333333333333, 0.9429497472807788, 0.9053891766821857, 0.9799066410634253, 0.015400999993084419, 0.4015941387222737, 0.4187395086220052, 0.0023054755043227667] |
| 0.0374 | 185.0 | 9250 | 0.8500 | 0.4261 | 0.5005 | 0.8835 | [nan, 0.8434716396594377, 0.8889128861529657, 0.64763139635125, 0.6591157906879173, 0.47535724026979675, 0.0, 0.6200541090314029, 0.749098883299684, 0.3885603318916056, 0.8826306979452221, 0.1625372623759957, 0.0, 0.08342478113492818, 0.39311682016480853, 0.6380806324629313, 0.0, 0.22758620689655173, 0.7521926906996731, 0.17508827683615819, 0.39885397225233327, 0.46177267841868885, 0.09434473050163783, nan, 0.14603587039096305, 0.4816513597668971, 0.4814476488755492, 0.10313216195569137, 0.9008454163938971, 0.818761674014968, 0.9607465658084764, 0.006843049110009815, 0.22781082971393046, 0.39319498274838577, 0.0] | [nan, 0.9312379371557167, 0.9615129186420878, 0.8851103856793643, 0.708138414982727, 0.5974376852013248, 0.0, 0.7408003646396516, 0.865632836519292, 0.4343471718591973, 0.9632888776864283, 0.1811086506697644, 0.0, 0.08346345271786865, 0.41440981093510476, 0.772771967462233, 0.0, 0.22758620689655173, 0.8636544903580905, 0.22540343965451928, 0.6805965245365061, 0.5255778376023376, 0.09821051148372552, nan, 0.29167033160348316, 0.6413712269623599, 0.6117282927098878, 0.10321100917431193, 0.9538809235006024, 0.8999256213056552, 0.9828299535018667, 0.007811149300488931, 0.3540249319002187, 0.5106163536574456, 0.0] |
| 0.0312 | 190.0 | 9500 | 0.8366 | 0.4271 | 0.5011 | 0.8871 | [nan, 0.8383583648435936, 0.8893585287734083, 0.6242144991822743, 0.6523942357118304, 0.4788692097394316, 0.0, 0.6222419857542325, 0.7495553204266636, 0.3855623463905866, 0.8844989483482312, 0.21960980490245122, 0.0, 0.03046415766238201, 0.39732965009208104, 0.6460657345680039, 0.0, 0.16235120873726838, 0.7700717667212197, 0.16549668505209203, 0.49368437402670146, 0.46331160358515755, 0.09818201434967902, nan, 0.17114682596121936, 0.5135764361691169, 0.4659315099786098, 0.10504201680672269, 0.9002915149364578, 0.8254822330499596, 0.9604699442360148, 0.009150900078881995, 0.18152508685955304, 0.3910305542248974, 0.0] | [nan, 0.9283969805595136, 0.9598165282698033, 0.8767078936680537, 0.7034928688316159, 0.5971018534658068, 0.0, 0.7531496234804661, 0.8637248860666893, 0.4372826167394361, 0.9671062455718215, 0.2709187933350274, 0.0, 0.03047233336586318, 0.4409810935104752, 0.7719575300696244, 0.0, 0.1629310344827586, 0.9197306063880243, 0.22505492840366695, 0.6378119115673065, 0.5240721319571477, 0.10257855431872623, nan, 0.31753012578063156, 0.6519010213591494, 0.620018791797933, 0.10512232415902141, 0.9564885836615992, 0.9020052271173103, 0.9846571430752903, 0.010750271436573745, 0.32109550071789916, 0.49397091092787193, 0.0] |
| 0.0326 | 195.0 | 9750 | 0.8707 | 0.4272 | 0.4984 | 0.8861 | [nan, 0.8261617719245659, 0.8854917604179252, 0.6200336534230758, 0.660580250534605, 0.4498640011519204, 0.0, 0.6209593550575648, 0.7414471855553728, 0.34006487158979826, 0.8877441348891416, 0.23385327442671236, 0.0, 0.0332081728190374, 0.43202489229296315, 0.6361883362956504, 0.0, 0.1902200488997555, 0.7701853795262287, 0.15860467354944288, 0.49904952690861926, 0.46916590678565206, 0.09274864326815566, nan, 0.17989392302744164, 0.5138984658207596, 0.4806735961411222, 0.12204424103737604, 0.9008746454479115, 0.8221407501198316, 0.9611822232918834, 0.00719201457815406, 0.1665572869766945, 0.3941783403071965, 0.0] | [nan, 0.9130443736650012, 0.965349714444587, 0.8908710545070002, 0.7139688682285827, 0.5282453082331068, 0.0, 0.7562413022290538, 0.8614280959708963, 0.37758701497323566, 0.9644183610682487, 0.28246270011253494, 0.0, 0.03322923782570508, 0.46116504854368934, 0.7536724173892315, 0.0, 0.1916256157635468, 0.9292328194741926, 0.19972725206455033, 0.628904385763347, 0.5198814168108274, 0.0958433140763703, nan, 0.325930160964025, 0.6750836867831942, 0.6223677665395457, 0.12232415902140673, 0.9573260874322359, 0.8965555357795243, 0.9795764203309079, 0.007942545348925665, 0.29287602485138814, 0.49881075790503954, 0.0] |
| 0.0323 | 200.0 | 10000 | 0.8527 | 0.4345 | 0.5079 | 0.8871 | [nan, 0.8382620833593052, 0.8876413942052827, 0.6261839847460975, 0.6590417473673477, 0.48228357004057837, 0.0, 0.6202905105623743, 0.748344409080285, 0.39096811362981676, 0.8848513296576286, 0.2415092028297553, 0.0, 0.07068982339740462, 0.41356382978723405, 0.6474134903246308, 0.0, 0.3062052505966587, 0.7704161510118073, 0.16108765491481541, 0.49752934863906867, 0.4734664813860761, 0.09820294554789893, nan, 0.17153699720635862, 0.514555863370054, 0.4660696051735875, 0.08826901031715705, 0.8991007829081079, 0.829742650939299, 0.9612781430019607, 0.01112666737555973, 0.1861992251927429, 0.391388886866003, 0.0] | [nan, 0.9255583122183136, 0.9555184973850358, 0.8927561553139153, 0.7130378697969978, 0.6275811980710011, 0.0, 0.7474676455043131, 0.8545937449541465, 0.43523520560447965, 0.9672661630501664, 0.28627436744473084, 0.0, 0.0707036205718747, 0.47675012774655084, 0.7689381524189783, 0.0, 0.31600985221674877, 0.9278457312029238, 0.2055231456928555, 0.6363063556709445, 0.5255962863991213, 0.10240946878962942, nan, 0.30514996921453075, 0.6575213496395762, 0.6054551483999336, 0.08830275229357798, 0.9550074747938649, 0.8984159398975186, 0.9823971352874257, 0.013025497748978224, 0.3256981066248004, 0.49491941043060034, 0.0] |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.7.1+cu110
- Datasets 1.18.3
- Tokenizers 0.10.3
|
kenghin-chuah/dgn-SpaceInvadersNoFrameskip-v4 | kenghin-chuah | 2023-09-20T08:14:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-20T08:13:22Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 708.00 +/- 220.06
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kenghin-chuah -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kenghin-chuah -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kenghin-chuah
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ringgo/toph_beifong | ringgo | 2023-09-20T08:10:20Z | 0 | 0 | null | [
"feature-extraction",
"en",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | feature-extraction | 2023-09-20T08:02:50Z | ---
license: openrail
language:
- en
pipeline_tag: feature-extraction
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ivanleomk/bert_password_sniffer | ivanleomk | 2023-09-20T08:06:20Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dataset:ivanleomk/prompt_injection_password",
"license:bsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-19T16:55:21Z | ---
license: bsd
datasets:
- ivanleomk/prompt_injection_password
---
This model was trained on a small dataset and is my first attempt at fine-tuning. It is a very trigger-happy input guard. |
romlinux/speecht5_tts_voxpopuli_nl | romlinux | 2023-09-20T08:05:44Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"romlinux_tts",
"generated_from_trainer",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-09-20T06:56:50Z | ---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- romlinux_tts
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4546
## Model description
More information needed
## Intended uses & limitations
More information needed
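As a rough sketch of the intended use (Dutch text-to-speech), the snippet below loads this checkpoint together with the standard SpeechT5 vocoder. The zero speaker embedding is only a placeholder assumption; in practice a 512-dimensional speaker x-vector (for example from the CMU Arctic x-vector set) gives far better results.

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned TTS checkpoint and the standard HiFi-GAN vocoder.
processor = SpeechT5Processor.from_pretrained("romlinux/speecht5_tts_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("romlinux/speecht5_tts_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
# SpeechT5 expects a (1, 512) speaker x-vector; zeros are a placeholder only.
speaker_embeddings = torch.zeros((1, 512))
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor sampled at 16 kHz.
```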
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.516 | 4.3 | 1000 | 0.4745 |
| 0.5009 | 8.6 | 2000 | 0.4626 |
| 0.4979 | 12.9 | 3000 | 0.4562 |
| 0.4928 | 17.2 | 4000 | 0.4546 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Siddhanta19/nc-backup | Siddhanta19 | 2023-09-20T08:00:11Z | 0 | 0 | null | [
"code",
"en",
"region:us"
] | null | 2023-09-20T07:59:01Z | ---
language:
- en
tags:
- code
---
# This is what powered almost all of my colab
Mostly uses LZ4 compression, which means you'll need a specialized program to extract it, especially on Windows.
For Windows users, I recommend using [7zip-zstd](https://github.com/mcmilk/7-Zip-zstd/releases/latest) (it's 7zip but with lz4 support and more)
For Linux users, use tar with liblz4-tool like this: `tar -xI lz4 -f repo.tar.lz4`
#### Credit to NoCrypt |
Kelum/distilbert-base-uncased-lora-text-classification | Kelum | 2023-09-20T07:58:45Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2023-09-20T07:58:43Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6525
- Accuracy: 0.7580
## Model description
More information needed
## Intended uses & limitations
More information needed
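As a hedged sketch of how this adapter might be used for inference: the card does not state which GLUE task was used, so `num_labels=2` and the base checkpoint below are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

# Attach the LoRA adapter to the distilbert base model.
base = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "Kelum/distilbert-base-uncased-lora-text-classification")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("This sentence is the input to classify.", return_tensors="pt")
predicted_class = model(**inputs).logits.argmax(dim=-1).item()
```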
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5409 | 1.0 | 16838 | 0.6491 | 0.7569 |
| 0.6219 | 2.0 | 33676 | 0.6801 | 0.7557 |
| 0.5441 | 3.0 | 50514 | 0.6795 | 0.7259 |
| 0.5468 | 4.0 | 67352 | 0.6557 | 0.7305 |
| 0.6058 | 5.0 | 84190 | 0.6806 | 0.7156 |
| 0.5519 | 6.0 | 101028 | 0.6796 | 0.7225 |
| 0.5084 | 7.0 | 117866 | 0.7562 | 0.7259 |
| 0.5687 | 8.0 | 134704 | 0.6804 | 0.7317 |
| 0.4703 | 9.0 | 151542 | 0.6986 | 0.7282 |
| 0.5249 | 10.0 | 168380 | 0.6525 | 0.7580 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
MANMEET75/gpt2-fine-tuned-on-custom | MANMEET75 | 2023-09-20T07:50:42Z | 24 | 0 | peft | [
"peft",
"text-generation",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | text-generation | 2023-08-23T18:48:57Z | ---
library_name: peft
pipeline_tag: text-generation
base_model: gpt2
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (reconstructed as a code sketch after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
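A hedged sketch of how the config above could be reconstructed when loading the `gpt2` base model and attaching this adapter; device placement is an assumption, and the values simply mirror the list.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruction of the quantization config listed above (other fields keep their defaults).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "MANMEET75/gpt2-fine-tuned-on-custom")
```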
### Framework versions
- PEFT 0.6.0.dev0 |
classla/xlm-r-parla | classla | 2023-09-20T07:47:24Z | 100 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"parliament",
"bs",
"bg",
"ca",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"de",
"el",
"hu",
"is",
"it",
"lt",
"lv",
"no",
"pl",
"pt",
"ro",
"sr",
"sk",
"sl",
"es",
"sv",
"tr",
"uk",
"arxiv:2309.09783",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | 2023-09-05T07:03:49Z | ---
license: apache-2.0
language:
- bs
- bg
- ca
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- de
- el
- hu
- is
- it
- lt
- lv
- 'no'
- pl
- pt
- ro
- sr
- sk
- sl
- es
- sv
- tr
- uk
tags:
- parliament
inference: false
---
# Multilingual parliamentary model XLM-R-parla
This is the [XLM-R-large model](https://huggingface.co/xlm-roberta-large) additionally pre-trained on texts of parliamentary proceedings. Texts for the additional pre-training, 1.7 billion words in size, come from the [ParlaMint corpus](http://hdl.handle.net/11356/1486) and the [EuroParl corpus](https://www.statmt.org/europarl/).
The model is a result of the [ParlaMint project](https://www.clarin.eu/parlamint). The details on the model development are described in the following [paper](https://arxiv.org/abs/2309.09783):
```latex
@article{
Mochtak_Rupnik_Ljubešić_2023,
title={The ParlaSent multilingual training dataset for sentiment identification in parliamentary proceedings},
rights={All rights reserved},
url={http://arxiv.org/abs/2309.09783},
abstractNote={Sentiments inherently drive politics. How we receive and process information plays an essential role in political decision-making, shaping our judgment with strategic consequences both on the level of legislators and the masses. If sentiment plays such an important role in politics, how can we study and measure it systematically? The paper presents a new dataset of sentiment-annotated sentences, which are used in a series of experiments focused on training a robust sentiment classifier for parliamentary proceedings. The paper also introduces the first domain-specific LLM for political science applications additionally pre-trained on 1.72 billion domain-specific words from proceedings of 27 European parliaments. We present experiments demonstrating how the additional pre-training of LLM on parliamentary data can significantly improve the model downstream performance on the domain-specific tasks, in our case, sentiment detection in parliamentary proceedings. We further show that multilingual models perform very well on unseen languages and that additional data from other languages significantly improves the target parliament’s results. The paper makes an important contribution to multiple domains of social sciences and bridges them with computer science and computational linguistics. Lastly, it sets up a more robust approach to sentiment analysis of political texts in general, which allows scholars to study political sentiment from a comparative perspective using standardized tools and techniques.},
note={arXiv:2309.09783 [cs]},
number={arXiv:2309.09783},
publisher={arXiv},
author={Mochtak, Michal and Rupnik, Peter and Ljubešić, Nikola},
year={2023},
month={Sep},
language={en}
}
```
The first application of this model is the [XLM-R-parlasent model](https://huggingface.co/classla/xlm-r-parlasent), fine-tuned on the [ParlaSent dataset](http://hdl.handle.net/11356/1868) for the task of sentiment analysis in parliamentary proceedings.
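Although the checkpoint is mainly intended as a base for further fine-tuning, it can be queried directly as a masked language model. The snippet below is a minimal sketch; the example sentence is an invented illustration, not taken from the training data.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="classla/xlm-r-parla")
# XLM-R models use "<mask>" as the mask token.
print(fill_mask("The honourable member raised an important <mask> during the debate."))
```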
|
ShaunThayil/bert-training-5 | ShaunThayil | 2023-09-20T07:45:09Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-20T07:44:16Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: training-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training-5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0341
- Accuracy: 0.9952
- Precision: 0.9982
- Recall: 0.9841
- F1: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.25 | 151 | 0.0468 | 0.9923 | 1.0 | 0.9717 | 0.9856 |
| No log | 0.5 | 302 | 0.0497 | 0.9908 | 0.9840 | 0.9823 | 0.9832 |
| No log | 0.75 | 453 | 0.0571 | 0.9918 | 1.0 | 0.9699 | 0.9847 |
| No log | 1.0 | 604 | 0.0319 | 0.9961 | 1.0 | 0.9858 | 0.9929 |
| 0.0471 | 1.25 | 755 | 0.0353 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0471 | 1.5 | 906 | 0.0346 | 0.9942 | 0.9929 | 0.9858 | 0.9893 |
| 0.0471 | 1.75 | 1057 | 0.0678 | 0.9899 | 0.9772 | 0.9858 | 0.9815 |
| 0.0471 | 2.0 | 1208 | 0.0380 | 0.9952 | 1.0 | 0.9823 | 0.9911 |
| 0.0156 | 2.25 | 1359 | 0.0362 | 0.9952 | 1.0 | 0.9823 | 0.9911 |
| 0.0156 | 2.5 | 1510 | 0.0388 | 0.9942 | 0.9946 | 0.9841 | 0.9893 |
| 0.0156 | 2.75 | 1661 | 0.0418 | 0.9952 | 1.0 | 0.9823 | 0.9911 |
| 0.0156 | 3.0 | 1812 | 0.0333 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0121 | 3.24 | 1963 | 0.0326 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0121 | 3.49 | 2114 | 0.0309 | 0.9957 | 0.9982 | 0.9858 | 0.9920 |
| 0.0121 | 3.74 | 2265 | 0.0311 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0121 | 3.99 | 2416 | 0.0344 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0084 | 4.24 | 2567 | 0.0334 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0084 | 4.49 | 2718 | 0.0327 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0084 | 4.74 | 2869 | 0.0336 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
| 0.0084 | 4.99 | 3020 | 0.0341 | 0.9952 | 0.9982 | 0.9841 | 0.9911 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230913+cu121
- Tokenizers 0.13.3
|
trieudemo11/llama_7b_attrb_cate_4m_4 | trieudemo11 | 2023-09-20T07:44:45Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-20T07:44:27Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
BelightPills/Belight | BelightPills | 2023-09-20T07:21:58Z | 0 | 0 | null | [
"region:us"
] | null | 2023-09-20T07:21:05Z | Belight is a weight reduction item can be the most ideal choice to diminish weight and get better by and large. Peruse more Belight!
Buy now!! Click the Link Below for more information and get 50% discount now !! hurry up !!
Read More: https://www.boxdrug.com/BeliInd
https://sites.google.com/view/belight-pills/home
➢Product Name — Belight
➢Category – Weight loss
➢ Composition — Natural Organic Compound
➢ Side-Effects—NA
➢Final Rating: — 4.8
➢ Availability — Online
➢Offers & Discounts; SAVE TODAY! SHOP NOW TO buy SPECIAL OFFER!!!
What Is Belight?
Belight is a non-poisonous, vegan weight reduction pill, and its second most pivotal property is to fix stoutness-related nervousness. As the name proposes, this item contains such fixings which are essential to speed up ketosis. Also, its makers say that it is 100 percent liberated from GMO and artificially based fixings. This item is exceptionally productive in expanding energy and further developing temperament and digestion.
Buy now!! Click the Link Below for more information and get 50% discount now !! hurry up !!
Read More: https://www.boxdrug.com/BeliInd
https://sites.google.com/view/belight-pills/home
Belight Belight Pills Belight capsule Belight Tablets Belight Price Belight reviews Belight Ingredients Belight Benefits Belight Side effects Belight गोलियाँ Belight कैप्सूल Belight गोलियाँ Belight कीमत Belight समीक्षा Belight अवयव Belight लाभ Belight दुष्प्रभाव Belight कैप्सूल की कीमत Belight कैप्सूल समीक्षा Belight संयोजन Belight शिकायत Belight कहां खरीदें Belight कैसे इस्तेमाल करे Belight लागत Belight काम करता है Belight मंच Belight मूल Belight फार्मेसी
|
Ken12138/Taxi-v3-Qlearning | Ken12138 | 2023-09-20T07:12:11Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-20T07:12:04Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-Qlearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Ken12138/Taxi-v3-Qlearning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kataragi/clear_anime | kataragi | 2023-09-20T07:10:33Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-20T07:03:10Z | ---
license: creativeml-openrail-m
---
Immunity-mix
- This model is currently a test model.
- Please note that there is a performance gap compared to models currently in wide public distribution.
Overview
- Immunity-mix is a model created by merging cafe-Instagram, F222, and a self-made art-style LoRA on top of Immunity-Diffusion.
- The license is creativeml-openrail-m. There are no restrictions on commercial use of output images or on integration into commercial generation services. Feel free to use it.
- This model is a prototype, and further improvements are planned.
Sample images
Immunity-mix_testV1



|
elemtopos/q-FrozenLake-v1-4x4-noSlippery | elemtopos | 2023-09-20T06:52:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-20T06:52:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="elemtopos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SteveImmanuel/ppo-LunarLander-v2 | SteveImmanuel | 2023-09-20T06:49:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-20T06:48:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.27 +/- 18.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kapardhi/llama2-qlora-finetunined-french | kapardhi | 2023-09-20T06:46:03Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-20T06:20:34Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
sudhanshu753/setfit-model | sudhanshu753 | 2023-09-20T06:44:38Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-09-20T06:44:19Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# sudhanshu753/setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("sudhanshu753/setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
ShaunThayil/bert-training-4 | ShaunThayil | 2023-09-20T06:43:48Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-20T06:38:12Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: training-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0335
- Accuracy: 0.9931
- Precision: 0.9982
- Recall: 0.9875
- F1: 0.9928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 0.25 | 85 | 0.0511 | 0.9819 | 0.9821 | 0.9804 | 0.9813 |
| No log | 0.5 | 170 | 0.0752 | 0.9836 | 0.9982 | 0.9679 | 0.9828 |
| No log | 0.75 | 255 | 0.0550 | 0.9888 | 0.9841 | 0.9929 | 0.9885 |
| 0.1064 | 1.0 | 340 | 0.0383 | 0.9923 | 0.9964 | 0.9875 | 0.9919 |
| 0.1064 | 1.25 | 425 | 0.0485 | 0.9923 | 0.9982 | 0.9857 | 0.9919 |
| 0.1064 | 1.5 | 510 | 0.0468 | 0.9914 | 0.9964 | 0.9857 | 0.9910 |
| 0.1064 | 1.76 | 595 | 0.0477 | 0.9914 | 1.0 | 0.9822 | 0.9910 |
| 0.0322 | 2.01 | 680 | 0.0506 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0322 | 2.26 | 765 | 0.0455 | 0.9914 | 0.9928 | 0.9893 | 0.9911 |
| 0.0322 | 2.51 | 850 | 0.0466 | 0.9914 | 0.9946 | 0.9875 | 0.9911 |
| 0.0322 | 2.76 | 935 | 0.0491 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0217 | 3.01 | 1020 | 0.0517 | 0.9923 | 0.9964 | 0.9875 | 0.9919 |
| 0.0217 | 3.26 | 1105 | 0.0455 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.0217 | 3.51 | 1190 | 0.0338 | 0.9931 | 0.9982 | 0.9875 | 0.9928 |
| 0.0217 | 3.76 | 1275 | 0.0385 | 0.9940 | 1.0 | 0.9875 | 0.9937 |
| 0.024 | 4.01 | 1360 | 0.0376 | 0.9931 | 1.0 | 0.9857 | 0.9928 |
| 0.024 | 4.26 | 1445 | 0.0332 | 0.9931 | 0.9982 | 0.9875 | 0.9928 |
| 0.024 | 4.51 | 1530 | 0.0343 | 0.9923 | 0.9946 | 0.9893 | 0.9920 |
| 0.024 | 4.76 | 1615 | 0.0335 | 0.9931 | 0.9982 | 0.9875 | 0.9928 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230913+cu121
- Tokenizers 0.13.3
|
OpenBuddy/openbuddy-coder-34b-v11-bf16 | OpenBuddy | 2023-09-20T06:41:21Z | 19 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-29T12:55:45Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

# Copyright Notice
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
OpenBuddy/openbuddy-codellama2-34b-v11.1-bf16 | OpenBuddy | 2023-09-20T06:40:58Z | 2,608 | 11 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-08T02:21:55Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

# Copyright Notice
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
OpenBuddy/openbuddy-openllama-7b-v12-bf16 | OpenBuddy | 2023-09-20T06:40:33Z | 3,800 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-19T04:26:50Z | ---
license: apache-2.0
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)

## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
Gurusha/dreembooth_wearing_ring | Gurusha | 2023-09-20T06:25:18Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-09-20T05:04:29Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a human sks hand wearing a ring on the index finger
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Gurusha/dreembooth_wearing_ring
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a human sks hand wearing a ring on the index finger using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
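A minimal usage sketch with `diffusers`, assuming a CUDA device; the prompt reuses the instance prompt above and the step count is an arbitrary choice.

```python
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# Use the same fp16-fix VAE that was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Gurusha/dreembooth_wearing_ring")

image = pipe("a human sks hand wearing a ring on the index finger", num_inference_steps=30).images[0]
image.save("ring_hand.png")
```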
|
Gayathri142214002/bert-base-uncased-finetuned-cola | Gayathri142214002 | 2023-09-20T06:22:05Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-20T06:00:31Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Gayathri142214002/bert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Gayathri142214002/bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1509
- Validation Loss: 0.5207
- Train Matthews Correlation: 0.5906
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.4736 | 0.4637 | 0.5075 | 0 |
| 0.2584 | 0.4364 | 0.5962 | 1 |
| 0.1509 | 0.5207 | 0.5906 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jtatman/nyt87_07 | jtatman | 2023-09-20T06:18:42Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-20T03:49:09Z | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: nyt87_07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nyt87_07
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
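A hedged reconstruction of the listed hyperparameters as `TrainingArguments`; the output directory is an assumption, and options not listed in the card are left at their defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="nyt87_07",               # assumption
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,        # effective batch size of 16
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
)
```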
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.9838 | 0.18 | 2000 | 5.4214 |
| 4.5676 | 0.37 | 4000 | 4.9018 |
| 4.237 | 0.55 | 6000 | 4.5389 |
| 4.0824 | 0.73 | 8000 | 4.2746 |
| 4.0047 | 0.91 | 10000 | 4.1538 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Dans-Archive/Dans-RetroRodeo-13b-lora | Dans-Archive | 2023-09-20T06:05:23Z | 0 | 1 | null | [
"en",
"region:us"
] | null | 2023-09-20T05:20:04Z | ---
language:
- en
---
# What is the model for?
This model is proficient in crafting text-based adventure games. It can produce both concise replies and more expansive, novel-like descriptions. The ability to alternate between these two response styles can be triggered by a distinct system message.
# What's in the sausage?
This model was trained on [Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1) using a custom version of the floyd and cys datasets from the skein text adventure dataset. The dataset has been improved over what was used in my previous model "Dans-CreepingSenseOfDoom" and is this time presented in the more traditional adventure mode format.
## Prompt format and examples:
#### Format description:
Traditional adventure format with a description of the genre, tags, and other misc information at the top, followed by an optional introductory message. User messages are prefixed with two new lines and "> "; model messages begin one new line after the last user message, or after the description message if an introductory message is desired.
#### Examples of shorter responses:
```
[Genres: Science Fiction]
[Tags: humor, old school]
[Mode: Adventure]
[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]
[Misc: Writing era: 1993]
[Intro]
You are sitting at a desk in the offices of the Trans-Galactic Financial Services Corporation. You have been here for hours, trying to figure out how you can get your hands on some money that is rightfully yours. Unfortunately, it seems as if there's no way you will ever be able to do this without getting caught by the company president, Mr. Vance.
> You lean back in your chair
You stretch your arms above your head and yawn. It has been a long day already.
> You check your terminal
Your computer screen shows various bits of information about TGFC stocks and bonds.
> You check the news
The front page of today's newspaper reports that "TGFC" stands for "Too Greedy For Cash."
```
```
Two men, one a brusque bowling ball; the other a murderous dandy, discuss their fate and the fate of the world. They've chosen someone to send. He's clever, but hopefully not too clever.
Adventure mode, Varied response length, Creative
> You look around
You are in an empty room with no furniture except for a table. A man is sitting at the table, writing something on a piece of paper.
> You talk to him
"Hello," you say politely. "I was wondering if I could ask you some questions."
The man looks up from his writing. "Who are you?" he asks suspiciously.
"My name is..."
> You tell him your name is Dan
"Dan? That doesn't sound like a very nice name."
> You take offense
"It's my name!"
"Well it sounds stupid! And what do you want anyway?"
> You ask him where you are
"This is the place where we choose who gets sent next."
> You ask where people are getting sent
"To the future, obviously."
```
#### Examples of longer novel like responses:
```
[Genres: grimdark fantasy] [Tags: Fantasy, Grimdark, Serious, Villain Protagonist, War] [Description: This story is definitely more "story" than "game", however there are two distinct paths that can be followed, each with their own pitfalls and additional choices. This story can get long, and assuming you're surviving, years do pass in it and you and your environment will change.] [First message intro] [Mode: Interactive Storyteller]
It was a dark night when I received my first letter from my father. It had been sent by messenger bird and delivered to me at midnight. As I read through its contents, I could feel myself growing numb as I realized what he had asked of me. My father wanted me to go to war.
I knew this day would come eventually. He has always said that his greatest wish was to see me fight alongside him. However, now that the time had finally arrived, I found myself unable to respond immediately. This was going to be a huge decision, one that would affect both our lives forevermore.
My name is Kronos Vang. I am the son of the great warlord of the kingdom of Darrow, Lord Gendall Vang. I have lived my entire life here in the capital city of Nadorf, training under the tutelage of my father's most trusted general, Commander Eldon Blackwell. Now, at the age of 16, I find myself standing before a crossroads, one that will determine whether or not I follow in my father's footsteps.
> You go to war
As soon as I finished reading the letter, I made up my mind. If my father wanted me to join him in battle so badly, then I would do just that. I didn't want to disappoint him.
The next morning, I woke early and went downstairs to speak with Commander Blackwell about my future plans. When I entered his office, I saw him sitting behind his desk, looking very concerned.
"Kronos," he began slowly, "your father wants you to go to war."
"Yes sir," I replied. "He sent me a letter last night explaining everything."
"Then why did you come talk to me?" he asked.
"Because I need to tell someone," I answered truthfully. "And since you were the only person who came close to being like a father to me..."
Commander Blackwell smiled slightly. "Well, let's sit down and discuss this together," he said, motioning towards a chair across from his desk. Once we sat down, he continued speaking. "Look kiddo, I don't think anyone knows better than you how much your father loves you. And I know he wouldn't ask anything of you unless he thought it was absolutely necessary. But, I also know that you've never really shown any interest in becoming a soldier. In fact, you'd rather spend all your free time learning magic spells instead of practicing swordplay."
"But sir," I interrupted, "that doesn't mean I won't become good enough to fight someday! All I need is practice!"
"That may be true," he admitted, "but I still believe that you should stay here and continue your studies. Your father needs soldiers, yes, but he also needs magicians. Without them, he cannot win the wars he must fight. I promise you, if you remain here, I will train you until you are able to defend yourself against even the strongest foe."
"No sir," I insisted. "If my father asks me to go to war, then I must obey him. I will not betray him again."
"Again? What do you mean 'again'?" he asked curiously.
"When I was younger, I used to run away every chance I got," I explained. "Even though I knew it wasn't right, I couldn't help myself. Every time I ran off, I felt like I was abandoning my family. That's why I stayed here after my mother died. Because I didn't want to leave them ever again."
"Oh kiddo," he sighed sadly. "I'm sorry. I shouldn't have kept such a secret from you."
"What secret?" I asked confused.
"Your mother," he revealed. "She was a powerful sorceress. She cast a spell upon herself and your father that would make her die within five years of giving birth to you. After she passed away, your father became heartbroken. So, he cast another spell over himself that would prevent him from falling in love again. He believed that he would never marry again, nor would he ever have children."
"Wait," I interjected. "So, he didn't actually love me?"
"Of course he loved you!" he exclaimed. "How could you possibly doubt that? He raised you single-handedly, gave you everything you needed, and protected you whenever danger threatened. No, no, Kronos, he loves you deeply. But, he simply believes that he can protect you best by keeping you out of harm's way. That's why he forbade you from leaving home. That's why he refused to allow you to learn magic. He fears losing you too."
"But... what does this have to do with me going to war?" I asked.
"Nothing," he admitted. "I suppose I was merely trying to convince you to stay. But, if you truly desire to go, then I guess there isn't much I can say or do to stop you."
"Thank you sir," I told him sincerely. "I appreciate everything you've taught me these past ten years. Even though I'm leaving, I hope you know that I will never forget what you've done for me."
"Don't worry about me," he assured me. "Just remember what I said about your father. He loves you very much. Don't ever forget that."
After saying goodbye to Commander Blackwell, I returned to my room and packed up all my belongings. Then, I walked downstairs and left the castle without telling anyone else. I figured it was better to leave quietly while everyone was sleeping.
Once outside, I headed straight for the nearest tavern. There, I met up with my friends and informed them of my plan to enlist in the army. They weren't happy about it, but none of them tried to dissuade me either. Instead, they promised to meet me back here tomorrow evening, along with whatever supplies they could gather for us.
Now, I'm waiting impatiently for them to arrive. I wonder what kind of adventures lie ahead for me...
```
# Some quick and dirty training details:
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Sequence length: 4096
- \# of epochs: 4
- Training time: 2 hours
- Hardware: 1x RTX 3090
- Training type: QLoRA
- PEFT R/A: 32/32
# Credits:
### Holodeck-1:
Thank you to Mr. Seeker and the Kobold AI team for the wonderful model Holodeck-1
[Holodeck-1 Huggingface page](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1)
### Skein Text Adventure Data:
Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities. |
isashap/final | isashap | 2023-09-20T05:49:42Z | 32 | 0 | peft | [
"peft",
"text-generation",
"region:us"
] | text-generation | 2023-09-20T03:56:22Z | ---
library_name: peft
pipeline_tag: text-generation
---
widget:
- text: "When I worked in the kitchen"
  example_title: "example1"
- text: "When I worked in retail"
  example_title: "example2"
- text: "When I worked in the laundry room"
  example_title: "example3"
## Training procedure
### Framework versions
- PEFT 0.5.0 |
ananyaaaaa/my-pet-dog | ananyaaaaa | 2023-09-20T05:44:30Z | 29 | 0 | diffusers | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-20T05:38:57Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by ananyaaaaa following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: RNBGU-9
Sample pictures of this concept:
.jpg)
|
Siddhanta19/sd-checkpoint-backup | Siddhanta19 | 2023-09-20T05:44:18Z | 3 | 1 | diffusers | [
"diffusers",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-16T06:20:26Z | ---
license: gpl-3.0
language:
- en
library_name: diffusers
duplicated_from: shiowo/backups
---
Credit to shiowo/backups. This is just a fork of that repo.
My backups.
If you want me to remove a model, you can leave a comment in the community section.
Feel free to download. Contains Anything v4 and 4.5, with more coming soon :D
Anything v4 is under shiowo/backups/content/andite. Sorry if it's confusing.
You can run SD by downloading [my AUTOMATIC1111 package here](https://huggingface.co/shiowo/backups/blob/main/fastsdwebui.zip)
To do: change xformers in requirements.txt to a more recent version, move some pre-downloaded LoRAs from ControlNet to the LoRA folder, and add canny and negative embeddings.
The SD zip contains the Waifu Diffusion, Stability AI, and NyanMix VAEs for the models Anything v5, Reliberate, and NyanMix. It also contains the ControlNet models OpenPose, Lineart, Lineart Anime, and Shuffle.
It also has the Add Detail LoRA and a 4x sharp upscaler.
v4 to-do:
Add more quality-of-life extensions, fix some bugs, and upgrade the PyTorch Lightning version.
mohsen2/pytorch_model.bin | mohsen2 | 2023-09-20T05:37:33Z | 212 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:HooshvareLab/bert-fa-base-uncased-sentiment-snappfood",
"base_model:finetune:HooshvareLab/bert-fa-base-uncased-sentiment-snappfood",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-18T11:23:24Z | ---
license: apache-2.0
base_model: HooshvareLab/bert-fa-base-uncased-sentiment-snappfood
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: pytorch_model.bin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pytorch_model.bin
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased-sentiment-snappfood](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-snappfood) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6156
- Accuracy: 0.8674
- F1: 0.8674
- Precision: 0.8683
- Recall: 0.8674
## Model description
More information needed
## Intended uses & limitations
More information needed
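While fuller documentation is pending, a minimal inference sketch with the 🤗 `pipeline` API is shown below. It assumes the checkpoint in this repo loads directly as a text-classification model; the returned label names come from the model config and are not documented here.

```python
from transformers import pipeline

# Load the fine-tuned Persian sentiment classifier from this repository.
classifier = pipeline("text-classification", model="mohsen2/pytorch_model.bin")

# Persian example input ("the food was very delicious").
print(classifier("غذا خیلی خوشمزه بود"))  # e.g. [{'label': ..., 'score': ...}]
```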
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2268 | 1.0 | 2363 | 0.3416 | 0.8656 | 0.8656 | 0.8656 | 0.8656 |
| 0.1624 | 2.0 | 4726 | 0.4252 | 0.86 | 0.8600 | 0.8600 | 0.86 |
| 0.0991 | 3.0 | 7089 | 0.6156 | 0.8674 | 0.8674 | 0.8683 | 0.8674 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
learn3r/longt5_xl_summ_screen_bp_10 | learn3r | 2023-09-20T05:31:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/summ_screen_fd_bp",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-19T13:57:40Z | ---
license: apache-2.0
base_model: google/long-t5-tglobal-xl
tags:
- generated_from_trainer
datasets:
- learn3r/summ_screen_fd_bp
metrics:
- rouge
model-index:
- name: longt5_xl_summ_screen_bp_10
results:
- task:
name: Summarization
type: summarization
dataset:
name: learn3r/summ_screen_fd_bp
type: learn3r/summ_screen_fd_bp
metrics:
- name: Rouge1
type: rouge
value: 22.9554
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_summ_screen_bp_10
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglobal-xl) on the learn3r/summ_screen_fd_bp dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3323
- Rouge1: 22.9554
- Rouge2: 6.4509
- Rougel: 19.7437
- Rougelsum: 20.923
- Gen Len: 497.2456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.4559 | 0.97 | 14 | 2.0707 | 11.7833 | 1.6011 | 11.1858 | 10.3025 | 511.0 |
| 1.6238 | 1.95 | 28 | 1.5287 | 19.0489 | 4.687 | 16.6504 | 17.1808 | 511.0 |
| 1.3964 | 2.99 | 43 | 1.3520 | 21.9994 | 5.8519 | 18.9231 | 19.958 | 511.0 |
| 1.2538 | 3.97 | 57 | 1.3323 | 22.9554 | 6.4509 | 19.7437 | 20.923 | 497.2456 |
| 1.277 | 4.94 | 71 | 1.5462 | 14.6326 | 3.6509 | 12.4805 | 13.5001 | 507.2278 |
| 1.0071 | 5.98 | 86 | 1.3604 | 29.5352 | 9.9544 | 22.1073 | 28.1204 | 429.7722 |
| 0.8685 | 6.96 | 100 | 1.4361 | 31.0337 | 10.6724 | 22.3815 | 29.6325 | 451.7840 |
| 0.7498 | 8.0 | 115 | 1.5302 | 28.433 | 8.4887 | 21.3588 | 26.6817 | 473.8964 |
| 0.6226 | 8.97 | 129 | 1.6289 | 37.251 | 12.8214 | 24.8704 | 36.0027 | 358.7663 |
| 0.5558 | 9.74 | 140 | 1.5811 | 35.4657 | 12.0036 | 24.7787 | 34.3775 | 284.0266 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
zrx-kishore/falcon-40b-4bit | zrx-kishore | 2023-09-20T05:29:51Z | 0 | 0 | peft | [
"peft",
"base_model:tiiuae/falcon-40b",
"base_model:adapter:tiiuae/falcon-40b",
"region:us"
] | null | 2023-07-17T09:27:27Z | ---
library_name: peft
base_model: tiiuae/falcon-40b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
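For reference, the listed settings correspond roughly to the following `BitsAndBytesConfig` when loading the base model; this is an illustrative sketch, not the original training script.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings above (4-bit nf4, double quantization, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon required custom modeling code in older transformers releases
)
```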
### Framework versions
- PEFT 0.4.0.dev0
|
BeeZee1/SIT_qa_model_new | BeeZee1 | 2023-09-20T05:17:46Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-09-20T05:04:47Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: BeeZee1/SIT_qa_model_new
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BeeZee1/SIT_qa_model_new
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5751
- Validation Loss: 1.7625
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3898 | 2.1800 | 0 |
| 1.8220 | 1.7625 | 1 |
| 1.5751 | 1.7625 | 2 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Veer15/llama2-science-mcq-solver | Veer15 | 2023-09-20T05:15:13Z | 7 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"doi:10.57967/hf/1038",
"region:us"
] | null | 2023-08-29T10:04:49Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
iamshnoo/alpaca-2-70b-swahili | iamshnoo | 2023-09-20T05:10:55Z | 2 | 0 | peft | [
"peft",
"sw",
"en",
"dataset:iamshnoo/alpaca-cleaned-swahili",
"base_model:meta-llama/Llama-2-70b-hf",
"base_model:adapter:meta-llama/Llama-2-70b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-09-10T21:05:56Z | ---
language:
- sw
- en
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-swahili
base_model: meta-llama/Llama-2-70b-hf
---
This represents the PEFT weights only. The base model is LLaMA 2. Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below. The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning or a different base model trained with more language specific data.
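A minimal sketch of attaching these adapter weights to a 4-bit base model is shown below; the loading arguments mirror the quantization config listed under "Training procedure", and generation settings are left to the reader.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the LLaMA 2 base model in 4-bit, matching the QLoRA setup described above.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")

# Attach the PEFT (LoRA) adapter weights from this repository.
model = PeftModel.from_pretrained(base, "iamshnoo/alpaca-2-70b-swahili")
```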
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
iamshnoo/alpaca-2-7b-persian | iamshnoo | 2023-09-20T05:10:43Z | 7 | 0 | peft | [
"peft",
"fa",
"en",
"dataset:iamshnoo/alpaca-cleaned-persian",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-08-01T22:18:42Z | ---
language:
- fa
- en
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-persian
metrics:
- accuracy
base_model: meta-llama/Llama-2-7b-hf
---
This represents the PEFT weights only. The base model is LLaMA 2. Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below. The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning or a different base model trained with more language specific data.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
iamshnoo/alpaca-2-70b-greek | iamshnoo | 2023-09-20T05:10:26Z | 2 | 0 | peft | [
"peft",
"base_model:meta-llama/Llama-2-70b-hf",
"base_model:adapter:meta-llama/Llama-2-70b-hf",
"region:us"
] | null | 2023-09-10T20:54:03Z | ---
library_name: peft
base_model: meta-llama/Llama-2-70b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
iamshnoo/alpaca-2-13b-persian | iamshnoo | 2023-09-20T05:10:18Z | 4 | 1 | peft | [
"peft",
"fa",
"en",
"dataset:iamshnoo/alpaca-cleaned-persian",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-09-10T21:12:13Z | ---
language:
- fa
- en
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-persian
metrics:
- accuracy
base_model: meta-llama/Llama-2-13b-hf
---
This represents the PEFT weights only. The base model is LLaMA 2. Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below. The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning or a different base model trained with more language specific data.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
iamshnoo/alpaca-2-7b-greek | iamshnoo | 2023-09-20T05:10:10Z | 3 | 1 | peft | [
"peft",
"el",
"en",
"dataset:iamshnoo/alpaca-cleaned-greek",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-09-10T20:59:44Z | ---
language:
- el
- en
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-greek
base_model: meta-llama/Llama-2-7b-hf
---
This represents the PEFT weights only. The base model is LLaMA 2. Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below. The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning or a different base model trained with more language specific data.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
iamshnoo/alpaca-2-13b-greek | iamshnoo | 2023-09-20T05:10:01Z | 2 | 1 | peft | [
"peft",
"en",
"el",
"dataset:iamshnoo/alpaca-cleaned-greek",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-09-10T20:59:54Z | ---
language:
- en
- el
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-greek
base_model: meta-llama/Llama-2-13b-hf
---
This represents the PEFT weights only. The base model is LLaMA 2. Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below. The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning or a different base model trained with more language specific data.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
iamshnoo/alpaca-2-70b-persian | iamshnoo | 2023-09-20T05:09:55Z | 3 | 0 | peft | [
"peft",
"fa",
"en",
"dataset:iamshnoo/alpaca-cleaned-persian",
"base_model:meta-llama/Llama-2-70b-hf",
"base_model:adapter:meta-llama/Llama-2-70b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-09-10T21:12:22Z | ---
language:
- fa
- en
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-persian
metrics:
- accuracy
base_model: meta-llama/Llama-2-70b-hf
---
This represents the PEFT weights only.
The base model is LLaMA 2.
Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below.
The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning
or a different base model trained with more language specific data.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
iamshnoo/alpaca-2-7b-bengali | iamshnoo | 2023-09-20T05:09:48Z | 5 | 0 | peft | [
"peft",
"bn",
"en",
"dataset:iamshnoo/alpaca-cleaned-bengali",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:cc-by-4.0",
"region:us"
] | null | 2023-09-10T20:27:53Z | ---
language:
- bn
- en
license: cc-by-4.0
library_name: peft
datasets:
- iamshnoo/alpaca-cleaned-bengali
base_model: meta-llama/Llama-2-7b-hf
---
This represents the PEFT weights only. The base model is LLaMA 2. Instruction finetuning was done using 4 bit QLoRA on a single A100 GPU with the PEFT config as given below. The dataset used for this instruction finetuning process is a translated version of the cleaned alpaca dataset (translated using NLLB-1.3B).
Do note that this model might have inferior performance on some language specific tasks compared to full finetuning or a different base model trained with more language specific data.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0 |
ZahrizhalAli/phi-1_5-code-generation | ZahrizhalAli | 2023-09-20T05:02:32Z | 0 | 0 | transformers | [
"transformers",
"code",
"text-generation",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-19T02:34:17Z | ---
library_name: transformers
pipeline_tag: text-generation
tags:
- code
--- |
Prajna1999/peerlist-musicgen-small | Prajna1999 | 2023-09-20T05:00:04Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-09-20T04:28:55Z | ---
inference: false
tags:
- musicgen
license: cc-by-nc-4.0
---
# MusicGen - Small - 300M
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [**small** (this checkpoint)](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
```
pip install git+https://github.com/huggingface/transformers.git
```
2. Run the following Python code to generate text-conditional audio samples:
```py
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
3. Listen to the audio samples either in an ipynb notebook:
```py
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```py
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("small")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow the application to be broadened to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
googcheng/recipe | googcheng | 2023-09-20T04:52:14Z | 1 | 0 | peft | [
"peft",
"llama",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2023-09-15T05:31:41Z | ---
library_name: peft
base_model: NousResearch/Llama-2-7b-hf
---
## title
A recipe classifier from https://github.com/OpenPipe/OpenPipe.
LoRA fine-tune based on LLaMA-2-7B; about one hour on 2× RTX 3090.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Souvik123/bankstatementmodelver7 | Souvik123 | 2023-09-20T04:52:03Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-09-20T04:50:25Z | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: bankstatementmodelver7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bankstatementmodelver7
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0745
## Model description
More information needed
## Intended uses & limitations
More information needed
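Until usage is documented, here is a minimal extractive question-answering sketch; the question and context are illustrative only and are not taken from the training data.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Souvik123/bankstatementmodelver7")

context = "Opening balance 1,200.00. 02/05 salary credit 3,500.00. 05/05 rent debit 900.00."
print(qa(question="What is the opening balance?", context=context))
```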
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0981 | 1.0 | 532 | 0.0672 |
| 0.0425 | 2.0 | 1064 | 0.0565 |
| 0.0376 | 3.0 | 1596 | 0.0546 |
| 0.026 | 4.0 | 2128 | 0.0309 |
| 0.0258 | 5.0 | 2660 | 0.0258 |
| 0.0211 | 6.0 | 3192 | 0.0397 |
| 0.0184 | 7.0 | 3724 | 0.0549 |
| 0.0222 | 8.0 | 4256 | 0.0354 |
| 0.0191 | 9.0 | 4788 | 0.0216 |
| 0.0209 | 10.0 | 5320 | 0.0403 |
| 0.0142 | 11.0 | 5852 | 0.0325 |
| 0.0143 | 12.0 | 6384 | 0.0317 |
| 0.0139 | 13.0 | 6916 | 0.0337 |
| 0.0146 | 14.0 | 7448 | 0.0315 |
| 0.0142 | 15.0 | 7980 | 0.0321 |
| 0.0132 | 16.0 | 8512 | 0.0216 |
| 0.0118 | 17.0 | 9044 | 0.0337 |
| 0.0174 | 18.0 | 9576 | 0.0427 |
| 0.0141 | 19.0 | 10108 | 0.0326 |
| 0.0127 | 20.0 | 10640 | 0.0408 |
| 0.014 | 21.0 | 11172 | 0.0355 |
| 0.0098 | 22.0 | 11704 | 0.0300 |
| 0.0116 | 23.0 | 12236 | 0.0220 |
| 0.012 | 24.0 | 12768 | 0.0345 |
| 0.0135 | 25.0 | 13300 | 0.0351 |
| 0.01 | 26.0 | 13832 | 0.0282 |
| 0.0091 | 27.0 | 14364 | 0.0291 |
| 0.0094 | 28.0 | 14896 | 0.0512 |
| 0.0116 | 29.0 | 15428 | 0.0278 |
| 0.0077 | 30.0 | 15960 | 0.0447 |
| 0.0096 | 31.0 | 16492 | 0.0338 |
| 0.0097 | 32.0 | 17024 | 0.0302 |
| 0.0098 | 33.0 | 17556 | 0.0279 |
| 0.0093 | 34.0 | 18088 | 0.0260 |
| 0.0099 | 35.0 | 18620 | 0.0432 |
| 0.0104 | 36.0 | 19152 | 0.0297 |
| 0.0083 | 37.0 | 19684 | 0.0288 |
| 0.0076 | 38.0 | 20216 | 0.0404 |
| 0.0114 | 39.0 | 20748 | 0.0366 |
| 0.0073 | 40.0 | 21280 | 0.0381 |
| 0.0102 | 41.0 | 21812 | 0.0473 |
| 0.0082 | 42.0 | 22344 | 0.0386 |
| 0.0064 | 43.0 | 22876 | 0.0172 |
| 0.0081 | 44.0 | 23408 | 0.0626 |
| 0.0075 | 45.0 | 23940 | 0.0410 |
| 0.0077 | 46.0 | 24472 | 0.1468 |
| 0.0095 | 47.0 | 25004 | 0.0436 |
| 0.0068 | 48.0 | 25536 | 0.0494 |
| 0.0055 | 49.0 | 26068 | 0.0484 |
| 0.0051 | 50.0 | 26600 | 0.0438 |
| 0.004 | 51.0 | 27132 | 0.0398 |
| 0.0043 | 52.0 | 27664 | 0.0546 |
| 0.005 | 53.0 | 28196 | 0.0509 |
| 0.0033 | 54.0 | 28728 | 0.0510 |
| 0.0054 | 55.0 | 29260 | 0.0554 |
| 0.004 | 56.0 | 29792 | 0.0430 |
| 0.0037 | 57.0 | 30324 | 0.0622 |
| 0.0028 | 58.0 | 30856 | 0.0573 |
| 0.0055 | 59.0 | 31388 | 0.0585 |
| 0.002 | 60.0 | 31920 | 0.0508 |
| 0.005 | 61.0 | 32452 | 0.0648 |
| 0.0031 | 62.0 | 32984 | 0.0541 |
| 0.0039 | 63.0 | 33516 | 0.0567 |
| 0.0018 | 64.0 | 34048 | 0.0627 |
| 0.002 | 65.0 | 34580 | 0.0445 |
| 0.003 | 66.0 | 35112 | 0.0708 |
| 0.0009 | 67.0 | 35644 | 0.0528 |
| 0.0015 | 68.0 | 36176 | 0.0613 |
| 0.0019 | 69.0 | 36708 | 0.0576 |
| 0.0023 | 70.0 | 37240 | 0.0592 |
| 0.0018 | 71.0 | 37772 | 0.0499 |
| 0.0011 | 72.0 | 38304 | 0.0495 |
| 0.0014 | 73.0 | 38836 | 0.0463 |
| 0.0014 | 74.0 | 39368 | 0.0493 |
| 0.0017 | 75.0 | 39900 | 0.0532 |
| 0.0008 | 76.0 | 40432 | 0.0666 |
| 0.0005 | 77.0 | 40964 | 0.0514 |
| 0.002 | 78.0 | 41496 | 0.0702 |
| 0.0026 | 79.0 | 42028 | 0.0426 |
| 0.0001 | 80.0 | 42560 | 0.0481 |
| 0.0019 | 81.0 | 43092 | 0.0551 |
| 0.0001 | 82.0 | 43624 | 0.0550 |
| 0.0 | 83.0 | 44156 | 0.0613 |
| 0.0012 | 84.0 | 44688 | 0.0568 |
| 0.0006 | 85.0 | 45220 | 0.0602 |
| 0.0001 | 86.0 | 45752 | 0.0623 |
| 0.0004 | 87.0 | 46284 | 0.0522 |
| 0.0007 | 88.0 | 46816 | 0.0647 |
| 0.0001 | 89.0 | 47348 | 0.0593 |
| 0.0002 | 90.0 | 47880 | 0.0552 |
| 0.0016 | 91.0 | 48412 | 0.0475 |
| 0.0005 | 92.0 | 48944 | 0.0531 |
| 0.0011 | 93.0 | 49476 | 0.0574 |
| 0.0 | 94.0 | 50008 | 0.0591 |
| 0.0 | 95.0 | 50540 | 0.0606 |
| 0.0005 | 96.0 | 51072 | 0.0599 |
| 0.0018 | 97.0 | 51604 | 0.0505 |
| 0.0 | 98.0 | 52136 | 0.0568 |
| 0.0011 | 99.0 | 52668 | 0.0692 |
| 0.0 | 100.0 | 53200 | 0.0702 |
| 0.0002 | 101.0 | 53732 | 0.0743 |
| 0.0 | 102.0 | 54264 | 0.0822 |
| 0.0007 | 103.0 | 54796 | 0.0905 |
| 0.0001 | 104.0 | 55328 | 0.0822 |
| 0.0005 | 105.0 | 55860 | 0.0792 |
| 0.0004 | 106.0 | 56392 | 0.0683 |
| 0.0018 | 107.0 | 56924 | 0.0526 |
| 0.0029 | 108.0 | 57456 | 0.0600 |
| 0.0005 | 109.0 | 57988 | 0.0631 |
| 0.0 | 110.0 | 58520 | 0.0659 |
| 0.0006 | 111.0 | 59052 | 0.0663 |
| 0.0 | 112.0 | 59584 | 0.0681 |
| 0.0012 | 113.0 | 60116 | 0.0537 |
| 0.0 | 114.0 | 60648 | 0.0558 |
| 0.0 | 115.0 | 61180 | 0.0574 |
| 0.0006 | 116.0 | 61712 | 0.0563 |
| 0.0 | 117.0 | 62244 | 0.0479 |
| 0.0015 | 118.0 | 62776 | 0.0584 |
| 0.0 | 119.0 | 63308 | 0.0606 |
| 0.0 | 120.0 | 63840 | 0.0624 |
| 0.0006 | 121.0 | 64372 | 0.0655 |
| 0.0003 | 122.0 | 64904 | 0.0688 |
| 0.0 | 123.0 | 65436 | 0.0790 |
| 0.0001 | 124.0 | 65968 | 0.0713 |
| 0.0 | 125.0 | 66500 | 0.0721 |
| 0.0006 | 126.0 | 67032 | 0.0689 |
| 0.0 | 127.0 | 67564 | 0.0679 |
| 0.0 | 128.0 | 68096 | 0.0693 |
| 0.0005 | 129.0 | 68628 | 0.0688 |
| 0.0 | 130.0 | 69160 | 0.0696 |
| 0.0 | 131.0 | 69692 | 0.0702 |
| 0.0 | 132.0 | 70224 | 0.0715 |
| 0.0 | 133.0 | 70756 | 0.0727 |
| 0.0 | 134.0 | 71288 | 0.0708 |
| 0.0 | 135.0 | 71820 | 0.0715 |
| 0.0 | 136.0 | 72352 | 0.0724 |
| 0.0 | 137.0 | 72884 | 0.0762 |
| 0.0 | 138.0 | 73416 | 0.0797 |
| 0.0 | 139.0 | 73948 | 0.0800 |
| 0.0 | 140.0 | 74480 | 0.0808 |
| 0.0 | 141.0 | 75012 | 0.0834 |
| 0.0 | 142.0 | 75544 | 0.0833 |
| 0.0014 | 143.0 | 76076 | 0.0782 |
| 0.0 | 144.0 | 76608 | 0.0748 |
| 0.0 | 145.0 | 77140 | 0.0749 |
| 0.0 | 146.0 | 77672 | 0.0751 |
| 0.0 | 147.0 | 78204 | 0.0738 |
| 0.0 | 148.0 | 78736 | 0.0744 |
| 0.0 | 149.0 | 79268 | 0.0744 |
| 0.0 | 150.0 | 79800 | 0.0745 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf | openthaigpt | 2023-09-20T04:29:46Z | 1,615 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"openthaigpt",
"th",
"en",
"dataset:kobkrit/rd-taxqa",
"dataset:iapp_wiki_qa_squad",
"dataset:Thaweewat/alpaca-cleaned-52k-th",
"dataset:Thaweewat/instruction-wild-52k-th",
"dataset:Thaweewat/databricks-dolly-15k-th",
"dataset:Thaweewat/hc3-24k-th",
"dataset:Thaweewat/gpteacher-20k-th",
"dataset:Thaweewat/onet-m6-social",
"dataset:Thaweewat/alpaca-finance-43k-th",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-14T14:08:19Z | ---
license: apache-2.0
datasets:
- kobkrit/rd-taxqa
- iapp_wiki_qa_squad
- Thaweewat/alpaca-cleaned-52k-th
- Thaweewat/instruction-wild-52k-th
- Thaweewat/databricks-dolly-15k-th
- Thaweewat/hc3-24k-th
- Thaweewat/gpteacher-20k-th
- Thaweewat/onet-m6-social
- Thaweewat/alpaca-finance-43k-th
language:
- th
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- openthaigpt
- llama
---
# 🇹🇭 OpenThaiGPT 1.0.0-beta
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce" width="200px">
https://openthaigpt.aieat.or.th/
🇹🇭 OpenThaiGPT Version 1.0.0-beta is a Thai-language, 7B-parameter LLaMA v2 Chat model finetuned to follow Thai-translated instructions, with more than 24,500 of the most common Thai words added to the tokenizer vocabulary for much faster generation.
## Upgrade from OpenThaiGPT 1.0.0-alpha
- Added more than 24,500 of the most common Thai words to the tokenizer vocabulary and re-pretrained the embedding layers, which makes the model generate Thai text about 10 times faster than the previous version.
## Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## License
**Source Code**: License Apache Software License 2.0.<br>
**Weight**: Research and **Commercial uses**.<br>
## Code and Weight
**Colab Demo**: https://colab.research.google.com/drive/1kDQidCtY9lDpk49i7P3JjLAcJM04lawu?usp=sharing<br>
**Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br>
**Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br>
**Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf
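A minimal 🤗 Transformers loading sketch is shown below. It uses a plain-text prompt for illustration; see the Colab demo above for the recommended chat prompt format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Thai prompt: "Hello, please recommend some Thai dishes."
inputs = tokenizer("สวัสดีครับ ช่วยแนะนำอาหารไทยหน่อย", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```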
## Sponsors
Pantip.com, ThaiSC<br>
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2FiWjRxBQgo0HUDcpZKf6A%2Fimage.png?alt=media&token=4fef4517-0b4d-46d6-a5e3-25c30c8137a6" width="100px">
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Ft96uNUI71mAFwkXUtxQt%2Fimage.png?alt=media&token=f8057c0c-5c5f-41ac-bb4b-ad02ee3d4dc2" width="100px">
### Powered by
OpenThaiGPT Volunteers, Artificial Intelligence Entrepreneur Association of Thailand (AIEAT), and Artificial Intelligence Association of Thailand (AIAT)
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2F6yWPXxdoW76a4UBsM8lw%2Fimage.png?alt=media&token=1006ee8e-5327-4bc0-b9a9-a02e93b0c032" width="100px">
<img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2FBwsmSovEIhW9AEOlHTFU%2Fimage.png?alt=media&token=5b550289-e9e2-44b3-bb8f-d3057d74f247" width="100px">
### Authors
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
<i>Disclaimer: Provided responses are not guaranteed.</i> |
Jiuzhouh/flan-t5-large-lora-expla_graph | Jiuzhouh | 2023-09-20T03:59:46Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-20T03:59:41Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
dlby/llm_model_num23awds | dlby | 2023-09-20T03:57:30Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-09-20T03:57:28Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Undi95/MM-ReMM-L2-20B-GGUF | Undi95 | 2023-09-20T03:57:17Z | 32 | 5 | null | [
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-19T16:05:50Z | ---
license: cc-by-nc-4.0
---
Merge:
```shell
layer_slices:
- model: Gryphe/MythoMax-L2-13b
start: 0
end: 16
- model: Undi95/MM-ReMM-L2-20B-Part1
start: 8
end: 20
- model: Gryphe/MythoMax-L2-13b
start: 17
end: 32
- model: Undi95/MM-ReMM-L2-20B-Part1
start: 21
end: 40
```
<!-- description start -->
## Models used
- Gryphe/MythoMax-L2-13b
- Undi95/ReMM-v2.1-L2-13B
<!-- description end -->
Part1 = ReMM v2.1 merged with MythoMax at low weight to keep consistency. I call this "dilution"; the result shows consistency and coherency without repeats/loops, aside from the small amount of duplicated data.
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that completes the request.
### Instruction:
{prompt}
### Response:
``` |
loosmore/ppo-LunarLander-v2 | loosmore | 2023-09-20T03:50:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-09-20T03:46:26Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -87.60 +/- 21.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo and load it with SB3's PPO.
checkpoint = load_from_hub(repo_id="loosmore/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Luciano/lora-llama-7b-hf-lener_br | Luciano | 2023-09-20T03:47:21Z | 8 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-03T12:55:46Z | ---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Luciano/prefix-tunning-bertimbau-base-lener_br | Luciano | 2023-09-20T03:47:11Z | 6 | 0 | peft | [
"peft",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:adapter:neuralmind/bert-base-portuguese-cased",
"region:us"
] | null | 2023-05-25T10:21:03Z | ---
library_name: peft
base_model: neuralmind/bert-base-portuguese-cased
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Luciano/lora-bloomz-560m-lener_br | Luciano | 2023-09-20T03:46:51Z | 2 | 0 | peft | [
"peft",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"region:us"
] | null | 2023-05-26T11:21:56Z | ---
library_name: peft
base_model: bigscience/bloomz-560m
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Luciano/qlora-bloom-560m-lener_br | Luciano | 2023-09-20T03:46:29Z | 2 | 0 | peft | [
"peft",
"base_model:bigscience/bloom-560m",
"base_model:adapter:bigscience/bloom-560m",
"region:us"
] | null | 2023-07-21T20:13:53Z | ---
library_name: peft
base_model: bigscience/bloom-560m
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Luciano/qlora-bloomz-7b1-mt-lener_br | Luciano | 2023-09-20T03:45:58Z | 1 | 0 | peft | [
"peft",
"base_model:bigscience/bloomz-7b1-mt",
"base_model:adapter:bigscience/bloomz-7b1-mt",
"region:us"
] | null | 2023-07-18T20:24:11Z | ---
library_name: peft
base_model: bigscience/bloomz-7b1-mt
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Koshti10/BART-base-ET-synthetic | Koshti10 | 2023-09-20T03:39:31Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-09-20T03:39:13Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
pengold/t5-vietnamese-summarization | pengold | 2023-09-20T03:31:21Z | 743 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"vi",
"base_model:pengold/t5-vietnamese-summarization",
"base_model:finetune:pengold/t5-vietnamese-summarization",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-09-17T17:51:41Z | ---
license: cc-by-4.0
base_model: pengold/t5-vietnamese-summarization
tags:
- summarization
metrics:
- rouge
model-index:
- name: t5-vietnamese-summarization
results: []
inference:
parameters:
min_length: 5
max_length: 150
widget:
- text: >-
summarize: Thảo luận phiên chuyên đề 2 về năng suất lao động tại Diễn đàn
Kinh tế - Xã hội 2023 ngày 19/9, Chuyên gia Kinh tế quốc tế Jonathan Pincus
(Tổ chức phát triển Liên hợp quốc tại Việt Nam) nói việc tăng năng suất lao
động cần nhìn nhận trong quá trình dài hơi thay vì trong giai đoạn ngắn. Rất
khó để một quốc gia có thể tăng trưởng năng suất nhanh trong một giai đoạn
dài, đó chính là bẫy năng suất trung bình, ông Pincus đúc rút, gọi đây là
mối đe dọa lớn.
example_title: Example 1
- text: >-
summarize: Đây là nỗ lực của chính phủ nhằm giảm đi cơn sốt trên
thị trường chứng khoán. Quyết định này có tác động ngay lập tức.
Chỉ số chính của thị trường chứng khoán Thượng Hải khi đóng cửa
giảm 281.8 điểm, ở mức 4053.1. Một số phân tích gia nói việc cổ
phiếu sụt giá cũng chỉ mang tính tạm thời mà thôi. Ngân hàng Thế
giới giờ đây dự đoán nền kinh tế Trung Quốc sẽ tăng 10.4% trong năm
nay. Lúc trước, Ngân hàng Thế giới dự đoán kinh tế Trung Quốc sẽ
tăng 9.6% trong năm 2007. Với việc Bắc Kinh đưa ra hành động nhằm
giảm nhiệt thị trường chứng khoán vào hôm thứ Tư, thuế đối với
cổ phiếu giao dịch giờ đây tăng từ 0.1% lên 0.3%. Tính đến phiên
đóng cửa vào hôm thứ Ba, chỉ số cổ phiếu Thượng Hải đã tăng 62%
trong năm nay, và có giá trị tăng gấp bốn lần kể từ đầu năm 2006.
Ông Thomas Gruener từ Landesbank Berlin nói: “Hành động này có thể
tạo ra việc điều chỉnh giá nhưng nhìn chung chúng tôi không cho là
xu hướng sẽ thay đổi”. Tuy nhiên, việc cổ phiếu Thượng Hải sụt giá
có thể sẽ tác động tới tâm lý của các thị trường chứng khoán
châu Âu. Thế nên các chỉ số chứng khoán tại châu Âu khi mở cửa hôm
thứ Tư đều hạ.
example_title: Example 2
language:
- vi
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-vietnamese-summarization
This model is a fine-tuned version of [pengold/t5-vietnamese-summarization](https://huggingface.co/pengold/t5-vietnamese-summarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6288
- Rouge1: 0.4728
- Rouge2: 0.1669
- Rougel: 0.3049
- Rougelsum: 0.3049
- Gen Len: 18.7458
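A quick way to try the checkpoint is through the `summarization` pipeline. The sketch below is illustrative only: the `summarize: ` task prefix and the generation bounds (min_length=5, max_length=150) mirror the widget and inference settings declared in this card's metadata, not separately documented usage instructions.

```python
from transformers import pipeline

# Minimal sketch: prefix and length bounds mirror the widget/inference
# settings in this card's metadata.
summarizer = pipeline("summarization", model="pengold/t5-vietnamese-summarization")

text = (
    "summarize: Đây là nỗ lực của chính phủ nhằm giảm đi cơn sốt "
    "trên thị trường chứng khoán."
)
print(summarizer(text, min_length=5, max_length=150)[0]["summary_text"])
```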
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 70
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 5.2487 | 1.0 | 2007 | 5.0028 | 0.4671 | 0.1595 | 0.2994 | 0.2994 | 18.7618 |
| 5.217 | 2.0 | 4014 | 4.9802 | 0.4639 | 0.1569 | 0.2984 | 0.2983 | 18.7747 |
| 5.2191 | 3.0 | 6021 | 4.9685 | 0.4644 | 0.1594 | 0.2989 | 0.2989 | 18.7613 |
| 5.2254 | 4.0 | 8028 | 4.9477 | 0.4648 | 0.1586 | 0.2988 | 0.2987 | 18.7458 |
| 5.1735 | 5.0 | 10035 | 4.9366 | 0.4654 | 0.1593 | 0.2988 | 0.2987 | 18.761 |
| 5.1735 | 6.0 | 12042 | 4.9214 | 0.4676 | 0.1611 | 0.3004 | 0.3004 | 18.78 |
| 5.1653 | 7.0 | 14049 | 4.9095 | 0.4681 | 0.1616 | 0.3007 | 0.3007 | 18.7523 |
| 5.1154 | 8.0 | 16056 | 4.8971 | 0.4664 | 0.1598 | 0.3002 | 0.3001 | 18.7655 |
| 5.1232 | 9.0 | 18063 | 4.8882 | 0.4683 | 0.1612 | 0.3008 | 0.3008 | 18.761 |
| 5.0995 | 10.0 | 20070 | 4.8758 | 0.4709 | 0.1618 | 0.3021 | 0.302 | 18.7518 |
| 5.1012 | 11.0 | 22077 | 4.8689 | 0.4685 | 0.1616 | 0.3011 | 0.3009 | 18.7665 |
| 5.0916 | 12.0 | 24084 | 4.8486 | 0.4695 | 0.1623 | 0.3024 | 0.3023 | 18.7655 |
| 5.0559 | 13.0 | 26091 | 4.8409 | 0.4699 | 0.1631 | 0.3024 | 0.3023 | 18.7849 |
| 5.0633 | 14.0 | 28098 | 4.8326 | 0.4705 | 0.1613 | 0.302 | 0.302 | 18.7583 |
| 5.0335 | 15.0 | 30105 | 4.8243 | 0.4696 | 0.1612 | 0.3023 | 0.3022 | 18.7638 |
| 5.0271 | 16.0 | 32112 | 4.8046 | 0.4691 | 0.1618 | 0.3022 | 0.3022 | 18.7518 |
| 5.0045 | 17.0 | 34119 | 4.8060 | 0.4708 | 0.1629 | 0.3029 | 0.3028 | 18.7568 |
| 5.0072 | 18.0 | 36126 | 4.7945 | 0.4702 | 0.1633 | 0.3024 | 0.3023 | 18.776 |
| 4.9954 | 19.0 | 38133 | 4.7894 | 0.47 | 0.1639 | 0.3022 | 0.3021 | 18.7785 |
| 4.9994 | 20.0 | 40140 | 4.7773 | 0.4692 | 0.1625 | 0.3028 | 0.3027 | 18.7623 |
| 4.953 | 21.0 | 42147 | 4.7641 | 0.4682 | 0.162 | 0.3015 | 0.3014 | 18.757 |
| 4.9526 | 22.0 | 44154 | 4.7600 | 0.4703 | 0.1626 | 0.3023 | 0.3023 | 18.7625 |
| 4.9571 | 23.0 | 46161 | 4.7592 | 0.4698 | 0.1627 | 0.3025 | 0.3025 | 18.781 |
| 4.9324 | 24.0 | 48168 | 4.7511 | 0.4697 | 0.1631 | 0.3022 | 0.3021 | 18.769 |
| 4.9323 | 25.0 | 50175 | 4.7433 | 0.4723 | 0.1649 | 0.304 | 0.3039 | 18.7757 |
| 4.9381 | 26.0 | 52182 | 4.7378 | 0.4703 | 0.1629 | 0.3026 | 0.3026 | 18.7782 |
| 4.9288 | 27.0 | 54189 | 4.7454 | 0.4709 | 0.1627 | 0.3026 | 0.3026 | 18.7777 |
| 4.9131 | 28.0 | 56196 | 4.7222 | 0.471 | 0.1652 | 0.3037 | 0.3037 | 18.782 |
| 4.9005 | 29.0 | 58203 | 4.7241 | 0.4719 | 0.1638 | 0.3039 | 0.3038 | 18.778 |
| 4.9051 | 30.0 | 60210 | 4.7225 | 0.4715 | 0.1647 | 0.3037 | 0.3036 | 18.7668 |
| 4.8816 | 31.0 | 62217 | 4.7181 | 0.4701 | 0.1631 | 0.3029 | 0.3029 | 18.7416 |
| 4.8687 | 32.0 | 64224 | 4.7061 | 0.4705 | 0.1643 | 0.3032 | 0.3031 | 18.7625 |
| 4.8935 | 33.0 | 66231 | 4.7063 | 0.4697 | 0.1632 | 0.3028 | 0.3028 | 18.7458 |
| 4.88 | 34.0 | 68238 | 4.6984 | 0.471 | 0.164 | 0.3039 | 0.3039 | 18.7663 |
| 4.8473 | 35.0 | 70245 | 4.6934 | 0.4699 | 0.1636 | 0.3034 | 0.3033 | 18.7531 |
| 4.8613 | 36.0 | 72252 | 4.6863 | 0.4705 | 0.1631 | 0.303 | 0.303 | 18.7797 |
| 4.8491 | 37.0 | 74259 | 4.6847 | 0.4703 | 0.1638 | 0.3037 | 0.3037 | 18.78 |
| 4.8239 | 38.0 | 76266 | 4.6804 | 0.4707 | 0.1632 | 0.3032 | 0.3032 | 18.7802 |
| 4.8767 | 39.0 | 78273 | 4.6788 | 0.4703 | 0.1637 | 0.3027 | 0.3026 | 18.7446 |
| 4.8402 | 40.0 | 80280 | 4.6700 | 0.4699 | 0.1633 | 0.3028 | 0.3028 | 18.7516 |
| 4.8261 | 41.0 | 82287 | 4.6660 | 0.4699 | 0.1633 | 0.3029 | 0.3028 | 18.7369 |
| 4.8193 | 42.0 | 84294 | 4.6693 | 0.4711 | 0.1654 | 0.3039 | 0.3038 | 18.7421 |
| 4.8161 | 43.0 | 86301 | 4.6636 | 0.4707 | 0.1642 | 0.303 | 0.303 | 18.7595 |
| 4.832 | 44.0 | 88308 | 4.6619 | 0.4708 | 0.1646 | 0.3036 | 0.3035 | 18.7423 |
| 4.8304 | 45.0 | 90315 | 4.6575 | 0.4711 | 0.1651 | 0.3038 | 0.3037 | 18.7354 |
| 4.7958 | 46.0 | 92322 | 4.6543 | 0.4711 | 0.165 | 0.3032 | 0.3032 | 18.7189 |
| 4.804 | 47.0 | 94329 | 4.6541 | 0.4711 | 0.1656 | 0.3037 | 0.3036 | 18.7396 |
| 4.7968 | 48.0 | 96336 | 4.6495 | 0.4709 | 0.165 | 0.3034 | 0.3034 | 18.7411 |
| 4.7912 | 49.0 | 98343 | 4.6471 | 0.4718 | 0.1655 | 0.3041 | 0.3042 | 18.7361 |
| 4.7721 | 50.0 | 100350 | 4.6469 | 0.4723 | 0.1667 | 0.3047 | 0.3047 | 18.7309 |
| 4.7828 | 51.0 | 102357 | 4.6476 | 0.4712 | 0.1656 | 0.3044 | 0.3045 | 18.7446 |
| 4.7934 | 52.0 | 104364 | 4.6453 | 0.4707 | 0.1645 | 0.3035 | 0.3035 | 18.7329 |
| 4.7724 | 53.0 | 106371 | 4.6425 | 0.4715 | 0.1657 | 0.304 | 0.304 | 18.7403 |
| 4.7804 | 54.0 | 108378 | 4.6362 | 0.4711 | 0.1658 | 0.3041 | 0.3041 | 18.7488 |
| 4.792 | 55.0 | 110385 | 4.6363 | 0.4706 | 0.1653 | 0.3038 | 0.3038 | 18.7281 |
| 4.7528 | 56.0 | 112392 | 4.6357 | 0.4724 | 0.1667 | 0.3044 | 0.3044 | 18.7463 |
| 4.7849 | 57.0 | 114399 | 4.6346 | 0.472 | 0.1661 | 0.3041 | 0.304 | 18.7431 |
| 4.7618 | 58.0 | 116406 | 4.6332 | 0.472 | 0.167 | 0.3046 | 0.3046 | 18.7336 |
| 4.7841 | 59.0 | 118413 | 4.6287 | 0.4716 | 0.1664 | 0.3043 | 0.3043 | 18.7369 |
| 4.7764 | 60.0 | 120420 | 4.6316 | 0.473 | 0.1666 | 0.3048 | 0.3047 | 18.7548 |
| 4.7504 | 61.0 | 122427 | 4.6276 | 0.4721 | 0.1671 | 0.3043 | 0.3044 | 18.7371 |
| 4.7629 | 62.0 | 124434 | 4.6250 | 0.4726 | 0.167 | 0.3046 | 0.3046 | 18.76 |
| 4.7764 | 63.0 | 126441 | 4.6264 | 0.4725 | 0.1666 | 0.3044 | 0.3044 | 18.7446 |
| 4.7524 | 64.0 | 128448 | 4.6275 | 0.4719 | 0.166 | 0.3041 | 0.3041 | 18.7428 |
| 4.7641 | 65.0 | 130455 | 4.6288 | 0.4728 | 0.1669 | 0.3049 | 0.3049 | 18.7458 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3 |
emeraldgoose/skt-gpt2-ko-qlora | emeraldgoose | 2023-09-20T03:16:57Z | 4 | 0 | peft | [
"peft",
"base_model:skt/kogpt2-base-v2",
"base_model:adapter:skt/kogpt2-base-v2",
"region:us"
] | null | 2023-08-30T09:39:01Z | ---
library_name: peft
base_model: skt/kogpt2-base-v2
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
emeraldgoose/pg-ko-1.3b-4bit-qlora | emeraldgoose | 2023-09-20T03:16:31Z | 6 | 0 | peft | [
"peft",
"base_model:EleutherAI/polyglot-ko-1.3b",
"base_model:adapter:EleutherAI/polyglot-ko-1.3b",
"region:us"
] | null | 2023-09-03T10:03:22Z | ---
library_name: peft
base_model: EleutherAI/polyglot-ko-1.3b
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
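For reference, the settings above correspond roughly to the `BitsAndBytesConfig` below. Loading the base model this way and attaching this adapter on top is a sketch inferred from the card metadata (base model `EleutherAI/polyglot-ko-1.3b`, adapter `emeraldgoose/pg-ko-1.3b-4bit-qlora`), not a documented recipe.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruction of the 4-bit NF4 settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model and adapter names come from this card's metadata.
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-1.3b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "emeraldgoose/pg-ko-1.3b-4bit-qlora")
```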
### Framework versions
- PEFT 0.6.0.dev0
|
p1atdev/ja-stablelm-base-alpha-7b-kogo-translation-peft | p1atdev | 2023-09-20T02:49:45Z | 6 | 0 | peft | [
"peft",
"ja",
"base_model:stabilityai/japanese-stablelm-base-alpha-7b",
"base_model:adapter:stabilityai/japanese-stablelm-base-alpha-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-08-12T01:57:01Z | ---
language:
- ja
license: apache-2.0
library_name: peft
base_model: stabilityai/japanese-stablelm-base-alpha-7b
---
## Sample

## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0 |
Logeswaransr/sample_finetuned | Logeswaransr | 2023-09-20T02:40:41Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:Logeswaransr/sample_finetuned",
"base_model:finetune:Logeswaransr/sample_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-09-19T17:25:34Z | ---
license: apache-2.0
base_model: Logeswaransr/sample_finetuned
tags:
- generated_from_trainer
model-index:
- name: sample_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sample_finetuned
This model is a fine-tuned version of [Logeswaransr/sample_finetuned](https://huggingface.co/Logeswaransr/sample_finetuned) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Sanjay1234/Trillo-Project | Sanjay1234 | 2023-09-20T02:25:43Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-09-06T17:45:58Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# Sanjay1234/Trillo-Project
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
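The two steps above correspond roughly to the training sketch below. The base checkpoint, example data, and hyperparameters are illustrative assumptions; only the two-stage procedure itself comes from this card.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot data; real training uses a handful of labelled
# examples per class.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

# Assumed MPNet-based Sentence Transformer starting point.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # loss for the contrastive step (1)
    num_iterations=20,                # sentence pairs generated per example
    num_epochs=1,
)
trainer.train()  # runs step 1, then fits the classification head (step 2)
```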
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Sanjay1234/Trillo-Project")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
yeye776/autotrain-intent-classification-5categories-90278144252 | yeye776 | 2023-09-20T02:11:33Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"ko",
"dataset:yeye776/autotrain-data-intent-classification-5categories",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-09-20T02:10:29Z | ---
tags:
- autotrain
- text-classification
language:
- ko
widget:
- text: "I love AutoTrain"
datasets:
- yeye776/autotrain-data-intent-classification-5categories
co2_eq_emissions:
emissions: 0.006709883159379549
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 90278144252
- CO2 Emissions (in grams): 0.0067
## Validation Metrics
- Loss: 0.025
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/yeye776/autotrain-intent-classification-5categories-90278144252
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("yeye776/autotrain-intent-classification-5categories-90278144252", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("yeye776/autotrain-intent-classification-5categories-90278144252", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)

# Map the highest-scoring logit back to its class name
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
``` |
peterxyz/detect-llama-34b-Instruct | peterxyz | 2023-09-20T01:35:41Z | 2 | 0 | peft | [
"peft",
"pytorch",
"llama",
"doi:10.57967/hf/2100",
"region:us"
] | null | 2023-09-06T01:13:53Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
kislayt/lyme-tweet-classification-v0-llama-2-7b | kislayt | 2023-09-20T01:20:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"lyme disease",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-16T21:38:13Z | ---
license: apache-2.0
language:
- en
tags:
- medical
- lyme disease
--- |
euclaise/falcon_1b_stage3 | euclaise | 2023-09-20T01:11:17Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"falcon",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:euclaise/falcon_1b_stage2",
"base_model:finetune:euclaise/falcon_1b_stage2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-09-18T11:47:17Z | ---
license: apache-2.0
base_model: euclaise/falcon_1b_stage2
tags:
- generated_from_trainer
model-index:
- name: falcon_1b_stage3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_1b_stage3
This model is a fine-tuned version of [euclaise/falcon_1b_stage2](https://huggingface.co/euclaise/falcon_1b_stage2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128.0
- total_train_batch_size: 128.0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 5
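Mapped roughly onto `transformers.TrainingArguments`, this configuration looks like the sketch below (a reconstruction; the actual training script is not part of this card, and `output_dir` is an assumption).

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="falcon_1b_stage3",
    learning_rate=6e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=128,  # effective batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    num_train_epochs=5,
    seed=42,
)
```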
### Training results
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
llm-jp/hf-slow-tokenizer-v21b1 | llm-jp | 2023-09-20T01:01:49Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2023-09-19T04:45:27Z | ---
license: apache-2.0
---
Usage (you need to pass **use_fast=False, trust_remote_code=True** to *AutoTokenizer.from_pretrained()*):
```Python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"llm-jp/hf-slow-tokenizer-v21b1",
legacy=True,
use_fast=False,
trust_remote_code=True,
)
```
When you create the tokenizer instance from a locally downloaded copy of the model with *trust_remote_code=True*, the model directory must be placed directly under the current working directory, and the *pretrained_model_name_or_path* **should not start with "./"** but **should end with "/"**.
```Python
tokenizer = AutoTokenizer.from_pretrained(
"hf-slow-tokenizer-v21b1/",
legacy=True,
use_fast=False,
trust_remote_code=True,
)
```
|