Dataset schema (column types and value ranges, per the dataset viewer):

- modelId: string, length 5 to 138
- author: string, length 2 to 42
- last_modified: date, 2020-02-15 11:33:14 to 2025-04-15 06:29:46
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 426 classes
- tags: sequence, length 1 to 4.05k
- pipeline_tag: string, 54 classes
- createdAt: date, 2022-03-02 23:29:04 to 2025-04-15 06:29:46
- card: string, length 11 to 1.01M

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
StrongMancapsule/StrongMan | StrongMancapsule | "2025-04-12T05:42:53Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-12T05:41:32Z" | ---
license: apache-2.0
---
What is Strong Man?
Strong Man Pills is a premium men's health capsule formulated to support enhanced sexual power, stamina, and vitality. With the modern lifestyle taking a toll on male performance, energy, and libido, the Strong Man capsule offers a natural solution to help men regain confidence and improve their intimate experiences. Whether you're facing low energy levels, reduced drive, or difficulty maintaining performance, Strong Man tablets are designed to help you feel like your best self again.
Official website:<a href="https://www.nutritionsee.com/strongaenya">www.StrongMan.com</a>
<p><a href="https://www.nutritionsee.com/strongaenya"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/04/Strong-Man-Kenya.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/strongaenya">Buy now!! Click the link below for more information and get 50% off now... Hurry</a>
|
Viya2023/Viya | Viya2023 | "2025-02-21T15:52:39Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-21T15:48:32Z" | ---
license: apache-2.0
---
|
RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf | RichardErkhov | "2024-09-17T02:17:53Z" | 64 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-09-16T21:09:26Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
StrangeMerges_51-7B-dare_ties - GGUF
- Model creator: https://huggingface.co/Gille/
- Original model: https://huggingface.co/Gille/StrangeMerges_51-7B-dare_ties/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [StrangeMerges_51-7B-dare_ties.Q2_K.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q2_K.gguf) | Q2_K | 2.53GB |
| [StrangeMerges_51-7B-dare_ties.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [StrangeMerges_51-7B-dare_ties.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [StrangeMerges_51-7B-dare_ties.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [StrangeMerges_51-7B-dare_ties.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [StrangeMerges_51-7B-dare_ties.Q3_K.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q3_K.gguf) | Q3_K | 3.28GB |
| [StrangeMerges_51-7B-dare_ties.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [StrangeMerges_51-7B-dare_ties.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [StrangeMerges_51-7B-dare_ties.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [StrangeMerges_51-7B-dare_ties.Q4_0.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q4_0.gguf) | Q4_0 | 3.83GB |
| [StrangeMerges_51-7B-dare_ties.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [StrangeMerges_51-7B-dare_ties.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [StrangeMerges_51-7B-dare_ties.Q4_K.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q4_K.gguf) | Q4_K | 4.07GB |
| [StrangeMerges_51-7B-dare_ties.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [StrangeMerges_51-7B-dare_ties.Q4_1.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q4_1.gguf) | Q4_1 | 4.24GB |
| [StrangeMerges_51-7B-dare_ties.Q5_0.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q5_0.gguf) | Q5_0 | 4.65GB |
| [StrangeMerges_51-7B-dare_ties.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [StrangeMerges_51-7B-dare_ties.Q5_K.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q5_K.gguf) | Q5_K | 4.78GB |
| [StrangeMerges_51-7B-dare_ties.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [StrangeMerges_51-7B-dare_ties.Q5_1.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q5_1.gguf) | Q5_1 | 5.07GB |
| [StrangeMerges_51-7B-dare_ties.Q6_K.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q6_K.gguf) | Q6_K | 5.53GB |
| [StrangeMerges_51-7B-dare_ties.Q8_0.gguf](https://huggingface.co/RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf/blob/main/StrangeMerges_51-7B-dare_ties.Q8_0.gguf) | Q8_0 | 7.17GB |
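A minimal run sketch, assuming llama-cpp-python (the repo and file names below come from the table; the rest is illustrative, not instructions from the quantizer):
```python
# Download one quant from the table above and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/Gille_-_StrangeMerges_51-7B-dare_ties-gguf",
    filename="StrangeMerges_51-7B-dare_ties.Q4_K_M.gguf",  # the 4.07GB entry
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a large language model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```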
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardMath-7B-V1.1
- Kukedlc/NeuralCoder-7b
- Weyaxi/Einstein-v4-7B
- 0-hero/Matter-0.1-Slim-7B-C-DPO
- Gille/StrangeMerges_42-7B-dare_ties
base_model:
- WizardLM/WizardMath-7B-V1.1
- Kukedlc/NeuralCoder-7b
- Weyaxi/Einstein-v4-7B
- 0-hero/Matter-0.1-Slim-7B-C-DPO
- Gille/StrangeMerges_42-7B-dare_ties
model-index:
- name: StrangeMerges_51-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_51-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_51-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_51-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_51-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_51-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_51-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_51-7B-dare_ties
StrangeMerges_51-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [Kukedlc/NeuralCoder-7b](https://huggingface.co/Kukedlc/NeuralCoder-7b)
* [Weyaxi/Einstein-v4-7B](https://huggingface.co/Weyaxi/Einstein-v4-7B)
* [0-hero/Matter-0.1-Slim-7B-C-DPO](https://huggingface.co/0-hero/Matter-0.1-Slim-7B-C-DPO)
* [Gille/StrangeMerges_42-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_42-7B-dare_ties)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralMaths-Experiment-7b
# No parameters necessary for base model
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.66
weight: 0.2
- model: Kukedlc/NeuralCoder-7b
parameters:
density: 0.55
weight: 0.2
- model: Weyaxi/Einstein-v4-7B
parameters:
density: 0.55
weight: 0.2
- model: 0-hero/Matter-0.1-Slim-7B-C-DPO
parameters:
density: 0.44
weight: 0.2
- model: Gille/StrangeMerges_42-7B-dare_ties
parameters:
density: 0.66
weight: 0.2
merge_method: dare_ties
base_model: Kukedlc/NeuralMaths-Experiment-7b
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
# Notebook-style install; drop the leading "!" when running outside Jupyter/Colab.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Gille/StrangeMerges_51-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat with the model's own chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# fp16 text-generation pipeline spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_51-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.73|
|AI2 Reasoning Challenge (25-Shot)|66.98|
|HellaSwag (10-Shot) |85.90|
|MMLU (5-Shot) |64.54|
|TruthfulQA (0-shot) |60.72|
|Winogrande (5-shot) |82.08|
|GSM8k (5-shot) |70.13|
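As a quick sanity check, the reported average is simply the mean of the six task scores:
```python
# The Avg. row equals the mean of the six benchmark scores above.
scores = [66.98, 85.90, 64.54, 60.72, 82.08, 70.13]
print(f"{sum(scores) / len(scores):.2f}")  # 71.73
```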
|
mrferr3t/0a6c9718-6f14-44aa-93aa-c6dcd0be675d | mrferr3t | "2025-02-05T08:38:13Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T08:29:55Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0a6c9718-6f14-44aa-93aa-c6dcd0be675d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 3d8f4511726216fa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3d8f4511726216fa_train_data.json
type:
field_instruction: context
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/0a6c9718-6f14-44aa-93aa-c6dcd0be675d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/3d8f4511726216fa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02
wandb_entity: null
wandb_mode: online
wandb_name: 09f9f7df-6475-46a0-b536-7003326d0de0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 09f9f7df-6475-46a0-b536-7003326d0de0
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0a6c9718-6f14-44aa-93aa-c6dcd0be675d
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 829
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.3562 |
| No log | 0.0075 | 40 | 2.3333 |
| No log | 0.0151 | 80 | 2.1438 |
| 2.2295 | 0.0226 | 120 | 1.9063 |
| 2.2295 | 0.0301 | 160 | 1.8336 |
| 1.8801 | 0.0377 | 200 | 1.7681 |
| 1.8801 | 0.0452 | 240 | 1.7166 |
| 1.8801 | 0.0527 | 280 | 1.6901 |
| 1.6951 | 0.0603 | 320 | 1.6795 |
| 1.6951 | 0.0678 | 360 | 1.7106 |
| 1.7178 | 0.0753 | 400 | 1.6958 |
| 1.7178 | 0.0829 | 440 | 1.6787 |
| 1.7178 | 0.0904 | 480 | 1.6393 |
| 1.632 | 0.0979 | 520 | 1.6385 |
| 1.632 | 0.1055 | 560 | 1.6287 |
| 1.6248 | 0.1130 | 600 | 1.6224 |
| 1.6248 | 0.1206 | 640 | 1.6117 |
| 1.6248 | 0.1281 | 680 | 1.6186 |
| 1.6587 | 0.1356 | 720 | 1.6094 |
| 1.6587 | 0.1432 | 760 | 1.6060 |
| 1.5706 | 0.1507 | 800 | 1.6116 |
| 1.5706 | 0.1582 | 840 | 1.6093 |
| 1.5706 | 0.1658 | 880 | 1.6187 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hgutjh/JJ | hgutjh | "2025-01-15T05:42:58Z" | 149 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"region:us"
] | text-to-image | "2025-01-15T05:42:46Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
(b&w) photo of woman, jessicajones, long black hair, half body, body,
looking at viewer, high detailed skin, skin pores, sfw, leather jacket,
(coastline), overcast weather, wind, waves, 8k uhd, dslr, soft lighting,
high quality, film grain, Fujifilm XT3 <lora:JessicaJonesV2:1>
parameters:
negative_prompt: >-
ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, bad
proportions, extra limbs, cloned face, disfigured, gross proportions,
malformed limbs, missing arms, missing legs, extra arms, extra legs, fused
fingers, too many fingers, long neck
output:
url: images/00312-1715847253.png
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
---
# JJ
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/hgutjh/JJ/tree/main) them in the Files & versions tab.
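A minimal loading sketch, assuming the adapter is compatible with the SD 3.5 Large base declared in the metadata (unverified):
```python
# Load the declared base model and attach this LoRA (compatibility assumed).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("hgutjh/JJ")
image = pipe("photo of woman, jessicajones, long black hair, looking at viewer").images[0]
image.save("jj.png")
```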
|
mradermacher/Qwen-14b-multichoice-v0-GGUF | mradermacher | "2024-09-28T03:21:18Z" | 29 | 0 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"base_model:BorggAgency/Qwen-14b-multichoice-v0",
"base_model:quantized:BorggAgency/Qwen-14b-multichoice-v0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-27T23:13:22Z" | ---
base_model: ha-ilyas10/Qwen-14b-multichoice-v0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ha-ilyas10/Qwen-14b-multichoice-v0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
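For the simple `.partXofY` byte splits those READMEs describe, concatenation is plain byte-level joining; a minimal sketch with hypothetical file names (check the linked README for your particular files):
```python
# Rebuild a single GGUF from hypothetical ".partXofY" byte splits.
import shutil

parts = ["model.Q8_0.gguf.part1of2", "model.Q8_0.gguf.part2of2"]  # placeholders
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```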
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.IQ3_XS.gguf) | IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.IQ3_M.gguf) | IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-14b-multichoice-v0-GGUF/resolve/main/Qwen-14b-multichoice-v0.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/BaeZel_V3-8B-Model_Stock-GGUF | mradermacher | "2025-01-10T03:17:40Z" | 313 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DreadPoor/BaeZel_V3-8B-Model_Stock",
"base_model:quantized:DreadPoor/BaeZel_V3-8B-Model_Stock",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-09T06:57:21Z" | ---
base_model: DreadPoor/BaeZel_V3-8B-Model_Stock
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DreadPoor/BaeZel_V3-8B-Model_Stock
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BaeZel_V3-8B-Model_Stock-GGUF/resolve/main/BaeZel_V3-8B-Model_Stock.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/haLLAwa3-GGUF | mradermacher | "2024-12-17T17:32:48Z" | 12 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-0106",
"machinists/Mistral-7B-SQL",
"en",
"base_model:AbacusResearch/haLLAwa3",
"base_model:quantized:AbacusResearch/haLLAwa3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-17T14:35:45Z" | ---
base_model: AbacusResearch/haLLAwa3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-0106
- machinists/Mistral-7B-SQL
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbacusResearch/haLLAwa3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/haLLAwa3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/haLLAwa3-GGUF/resolve/main/haLLAwa3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
imsanjoykb/mistral-7b-dolly | imsanjoykb | "2024-02-09T15:47:24Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-06T17:29:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/Skywork-13B-Airo-Claude-Pippa-Puffin-3.0bpw-h6-exl2 | LoneStriker | "2023-11-01T04:28:02Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"skywork",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-3.1",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:PygmalionAI/PIPPA",
"dataset:LDJnr/Puffin",
"arxiv:2310.19341",
"arxiv:2310.16713",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-11-01T04:27:42Z" | ---
license: other
license_name: license
license_link: >-
https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf
datasets:
- jondurbin/airoboros-3.1
- Norquinal/claude_multiround_chat_30k
- PygmalionAI/PIPPA
- LDJnr/Puffin
---
<!-- <div align="center">
<h1>
✨Skywork
</h1>
</div> -->
<div align="center"><img src="misc/skywork_logo.jpeg" width="550"/></div>
<p align="center">
👨‍💻 <a href="https://github.com/SkyworkAI/Skywork" target="_blank">Github</a> • 🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a> • 💬 <a href="https://github.com/SkyworkAI/Skywork/blob/main/misc/wechat.png?raw=true" target="_blank">WeChat</a> • 📜 <a href="http://arxiv.org/abs/2310.19341" target="_blank">Tech Report</a>
</p>
<div align="center">
[🎉天工在线对话平台已正式向公众开放(The Tiangong online chat platform is now officially open to the public)](https://sso.tiangong.cn/?redirect=https://model-platform.tiangong.cn/overview&client_id=200005)
</div>
<div align="center">
[](https://github.com/SkyworkAI/Skywork/stargazers)
[](https://github.com/SkyworkAI/Skywork/fork)
</div>
# 模型介绍(Introduction)
**Skywork-13B-Base**模型在高质量清洗过滤的3.2万亿个多语言(主要是中文和英文)和代码数据上进行预训练,它在多种评测和各种基准测试上都展现了同等规模模型的最佳效果。
**Skywork-13B-Base**: The model was pre-trained on a high-quality, cleaned dataset of 3.2 trillion tokens of multilingual (mainly Chinese and English) and code data. It has demonstrated the best performance among models of similar scale in various evaluations and benchmark tests.
如果您希望了解更多的信息,如训练方案,评估方法,请参考我们的[技术报告](http://arxiv.org/abs/2310.19341),[Skymath](https://arxiv.org/abs/2310.16713)论文,[SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf)论文。
If you are interested in more training and evaluation details, please refer to our [technical report](http://arxiv.org/abs/2310.19341), [Skymath](https://arxiv.org/abs/2310.16713) paper and [SkyworkMM](https://github.com/will-singularity/Skywork-MM/blob/main/skywork_mm.pdf) paper.
## 训练数据(Training Data)
我们精心搭建了数据清洗流程对文本中的低质量数据、有害信息、敏感信息进行清洗过滤。我们的Skywork-13B-Base模型是在清洗后的3.2TB高质量中、英、代码数据上进行训练,其中英文占比52.2%,中文占比39.6%,代码占比8%,在兼顾中文和英文上的表现的同时,代码能力也能有保证。
We have developed a data cleaning pipeline with great care to effectively clean and filter low-quality data and eliminate harmful information from text data. Our Skywork-13B-Base model is trained on a cleaned dataset of 3.2 trillion tokens of high-quality Chinese, English, and code data. The English data comprises 52.2% of the dataset, the Chinese data accounts for 39.6%, and the code data makes up 8%. This comprehensive approach ensures optimal performance for both Chinese and English while also maintaining the ability to handle code.
| | Category | Percentage |
|-------------|------------------|------------|
| **English** | Webpages | 39.8% |
| | Books | 3.6% |
| | Academic Papers | 3.0% |
| | Encyclopedia | 0.5% |
| | Miscellany | 2.9% |
| **Chinese** | Webpages | 30.4% |
| | Social Media | 5.5% |
| | Encyclopedia | 0.8% |
| | Miscellany | 3.1% |
| **Other Lang.** | Encyclopedia | 2.4% |
| **Code** | Github | 8.0% |
## 模型结构(Model Structure)
与Llama-2-13B模型对比,天工Skywork-13B模型采用相对更加瘦长的网络结构,层数为52层,同时将FFN Dim和Hidden Dim缩小到12288和4608,从而保证模型参数量和原始Llama-2-13B模型相当。根据我们前期实验对比,相对瘦长的网络结构在大Batch Size训练下可以取得更好的泛化效果。Skywork-13B和Llama-2-13B模型的对比如下:
Compared to the Llama2-13B model, the Skywork-13B model adopts a relatively thinner and deeper network structure with 52 layers. At the same time, the FFN Dim and Hidden Dim are reduced to 12288 and 4608, respectively, to ensure that the model has a similar number of parameters as the original Llama-2-13B model. Based on our preliminary experimental results, a relatively thinner and deeper network structure can achieve better generalization performance under large batch size training. The detailed comparison between the Skywork-13B and Llama-2-13B models is as follows:
| Model Structure | Llama2-13B | Skywork-13B |
|----------------------|:----:|:-----------:|
| Vocab. Size | 32,000 | 65,536 |
| Hidden Dim. | 5,120 | 4,608 |
| FFN Dim. | 13,696 | 12,288 |
| Head Dim. | 128 | 128 |
| Num. Heads | 40 | 36 |
| Num. Layers | 40 | 52 |
| Seq. Len. | 4,096 | 4,096 |
| Positional Embedding | RoPE | RoPE |
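A back-of-the-envelope cross-check of this parameter parity, assuming a SwiGLU-style FFN with three projections and untied embeddings, and ignoring biases and norm weights:
```python
# Approximate parameter counts from the table above (illustrative only).
def approx_params(vocab, hidden, ffn, layers):
    embed = 2 * vocab * hidden    # input + output embedding matrices
    attn = 4 * hidden * hidden    # Q, K, V, O projections per layer
    mlp = 3 * hidden * ffn        # gate/up/down projections per layer
    return embed + layers * (attn + mlp)

print(f"Llama2-13B  ~ {approx_params(32_000, 5_120, 13_696, 40) / 1e9:.1f}B")
print(f"Skywork-13B ~ {approx_params(65_536, 4_608, 12_288, 52) / 1e9:.1f}B")
# Both land near 13B despite Skywork's thinner-and-deeper shape.
```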
## 分词器(Tokenizer)
我们使用Byte-Pair Encoding(BPE)对数据进行分词,词表大小为65536,其中拉丁字符和子词为32000个,汉字和Unicode符号8000个,汉语词语25519个,剩下的17个为保留字。
We use Byte-Pair Encoding (BPE) to tokenize the data, with a vocabulary size of 65536. Among them, there are 32000 Latin characters and subwords, 8000 Chinese characters and Unicode symbols, 25519 Chinese words, and the remaining 17 are reserved words.
| Category | Size |
|---------------------------------|--------|
| Latin based words & subwords | 32000 |
| Chinese characters & Unicode symbols | 8000 |
| Chinese words | 25519 |
| Reserved symbols | 17 |
| **Total** | **65536** |
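A minimal sketch for inspecting the tokenizer (repo id taken from the Quickstart below; requires `trust_remote_code`):
```python
# Load the tokenizer and confirm the 65,536-entry vocabulary.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("SkyworkAI/Skywork-13B-Base", trust_remote_code=True)
print(tok.vocab_size)                  # expected: 65536, per the table above
print(tok.tokenize("陕西的省会是西安"))  # mixes Chinese word- and character-level pieces
```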
# 模型评估(Evaluation)
## 领域数据困惑度评估(Perplexity Evaluation)
语言模型训练的本质上是让预测下一个词更准确。基于这个认知,我们认为评估基础大模型一个重要的方式是评估在各大领域上语言模型生成文章的概率。在模型训练中预测下一个词的概率一般使用Cross Entropy损失函数,整体的损失函数为每个位置预测真实词损失的平均,则有:
The essence of language-model training is making next-token prediction more accurate. Based on this view, we believe an important way to evaluate a foundation model is to measure the probability it assigns to documents in each major domain. Next-token prediction is trained with the cross-entropy loss, and the overall loss is the average of the per-position losses of predicting the true token:
$$\mathrm{loss} = -\frac{1}{n}\sum_{i=1}^{n} \log(p_i) = -\frac{1}{n}\log\left(\prod_{i=1}^{n} p_i\right)$$
其中$n$是文档的长度,即token数,$p_i$是位置i上真实词的概率,我们知道文档中每一个位置上真实词的概率的联乘则为生成该文档的概率,如此我们就将loss和生成文章的概率联系在了一起。而不同模型因为使用的分词器不同,具有不同的token数,因此对损失函数乘以token数目$n$,这样就仅考虑生成文章的概率部分,不同模型也可以进行比较。我们将标准化后loss取指数转换成perplexity,使得模型的差异更加可读。为了阅读方便后续提到的loss和ppl为模型标准化后的loss和perplexity。
Here $n$ is the document length in tokens and $p_i$ is the probability of the true token at position $i$. The product of the per-position probabilities is the probability of generating the document, which ties the loss directly to document probability. Because different models use different tokenizers and hence different token counts, we multiply the loss by the token count $n$, so that only the document-probability part remains and models become comparable. We then exponentiate the normalized loss into perplexity to make the differences between models more readable. For convenience, the "loss" and "ppl" mentioned below are the normalized loss and perplexity.
基于上述分析,我们对多个领域筛选出2023年9月份新发布的几百到上千篇高质量文章,并人工进行了核对。保证所有的测试数据不在天工模型以及其他所有模型的训练集中,并且测试数据的来源也足够广泛,质量也高。我们可以选取当前最新的文章评测不同模型的ppl,模型很难作弊。
下表列出了不同开源模型,天工Skywork-13B-Base取得最优效果,证明了我们的Base模型的基础能力处于国内开源模型中文最强水平。
We selected several hundred to several thousand high-quality articles, newly published in September 2023, across various fields and verified them manually. None of this test data appears in the training set of the Skywork model or of any other evaluated model, and the sources are broad and the quality is high. Since we can always evaluate ppl on the most recently published articles, it is very hard for any model to cheat.
The table below lists the results for different open-source models. Skywork-13B-Base achieves the best results, demonstrating that our base model's Chinese capability is the strongest among domestic open-source models.
| | Tech | Movie | Gov. | Game | Finance | General | Average |
|------------------|-------|-------|-------|-------|---------|---------|---------|
| MOSS-7B | 20.83 | 39.66 | 11.08 | 31.24 | 10.59 | 13.25 | 18.50 |
| InternLM-7B | 13.43 | 24.90 | 5.88 | 19.78 | 6.17 | 8.10 | 11.17 |
| Qwen-7B | 13.39 | 25.16 | 5.55 | 19.26 | 5.76 | 7.78 | 10.83 |
| Baichuan2-7B | 12.89 | 23.26 | 5.34 | 18.36 | 5.68 | 7.62 | 10.41 |
| LLaMA2-13B | 23.26 | 50.66 | 18.09 | 32.52 | 14.85 | 16.55 | 23.54 |
| Xverse-13B | 12.55 | 23.49 | 5.20 | 17.69 | 5.54 | 7.46 | 10.19 |
| Baichuan-13B | 12.38 | 22.46 | 5.21 | 17.59 | 5.42 | 7.37 | 10.03 |
| Baichuan2-13B | 12.14 | 21.85 | 5.05 | 17.15 | 5.35 | 7.24 | 9.81 |
| Qwen-14B | 11.90 | 22.43 | 4.89 | **16.94** | 5.24 | 7.03 | 9.67 |
| InternLM-20B | 12.34 | 22.06 | 5.75 | 17.45 | 5.73 | 7.78 | 10.34 |
| Aquila2-34B | 14.62 | 29.09 | 5.72 | 21.78 | 5.83 | 8.45 | 11.73 |
| Skywork-13B-Base | **11.58** | **21.84** | **4.76** | 17.28 | **4.92** | **6.82** | **9.42** |
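A minimal sketch of the per-document normalized loss and perplexity described above (illustrative only; the released evaluation script follows in the next subsection):
```python
# Score one document: mean token NLL, tokenizer-independent total NLL, and ppl.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "SkyworkAI/Skywork-13B-Base"
tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, device_map="auto", trust_remote_code=True
).eval()

doc = "..."  # one freshly published article
ids = tok(doc, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    mean_nll = model(ids, labels=ids).loss  # mean -log p_i (labels shifted internally)
doc_nll = mean_nll * ids.shape[1]           # times n: tokenizer-independent -log P(doc)
ppl = torch.exp(mean_nll)                   # this document's perplexity
print(f"ppl = {ppl.item():.2f}")
```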
### 评测数据和评测脚本(Loss Evaluation)
我们将评测数据和评测脚本也进行了开源,下载github上的代码运行下面命令则可以复现我们的结果。
We have also open-sourced the data and evaluation scripts. You can reproduce our results by running the following command.
```
bash bash_scripts/skywork_eval_loss.sh
```
## Benchmark评估(Benchmark Results)
我们评估了各大权威评测benchmark上的结果作为参考,包括C-Eval,MMLU,CMMLU,GSM8K。遵循之前的评估流程,C-Eval、MMLU、CMMLU测试5-shot结果,GSM8K测试8-shot结果。可以看到Skywork-13B-Base模型在中文开源模型中处于前列,在同等参数规模下为最优水平。
We evaluated Skywork-13B-Base on several popular benchmarks, including C-Eval, MMLU, CMMLU, and GSM8K. Following the previous evaluation process, we tested the 5-shot results of C-Eval, MMLU, and CMMLU, and the 8-shot results of GSM8K. It can be seen that the Skywork-13B-Base model is among the top models in the Chinese open source model community, performing at an optimal level with the same parameter scale.
| Model | C-Eval | CMMLU | MMLU | GSM8K |
|-------------------------|:-----:|:---------------:|:----------:|:-------:|
| LLaMA-1-13B-Base | 35.5 | 31.2 | 46.9 | 17.8 |
| Open-LLaMA-13B | 27.1 | 26.7 | 42.7 | 12.4 |
| LLaMA-2-13B-Base | 36.5 | 36.6 | 54.8 | 28.7 |
| InternLM-20B | 58.8 | - | 62.0 | 52.6 |
| Qwen-14B-Base | 72.1 | 71.0 | 66.3 | 61.3 |
| Aquila2-34B-Base | 63.1 | 71.4 | 64.2 | 58.4 |
| XVERSE-13B-Base | 54.7 | - | 55.1 | - |
| Baichuan-13B-Base | 52.4 | 55.3 | 51.6 | 26.6 |
| Baichuan-2-13B-Base | 58.1 | 62.0 | 59.2 | 52.3 |
| Skywork-13B-Base (ours) | 60.6 | 61.8 | 62.1 | 55.8 |
## Benchmark评估详细结果(Detailed Benchmark Results)
我们给出**Skywork-13B-Base**模型在C-Eval,CMMLU,MMLU上模型的详细结果。
We provide detailed results of the Skywork-13B-Base model on C-EVAL, CMMLU, and MMLU.
| Benchmark | **STEM** | **Humanities** | **Social Science** | **Other** | **China Specific** | **Hard** | **Average** |
|:-----:|:---------:|:--------:|:-------------:|:--------:|:--------:|:--------:|:--------:|
| **C-EVAL** | 51.2 | 67.8 | 74.6 | 57.5 | - | 39.4 | 60.6 |
| **CMMLU** | 49.5 | 69.3 | 65.9 | 63.3 | 64.2 | - | 61.8 |
| **MMLU** | 51.6 | 58.0 | 72.5 | 68.8 | - | - | 62.1 |
# 快速开始(Quickstart)
我们将模型参数、配置文件、tokenizer等在huggingface和modelscope上进行了开源。
We have open-sourced the model parameters, configuration files, tokenizer, and more on Huggingface and Modelscope.
## 依赖安装(Requirements)
- Python 3.8及以上版本
- Pytorch 2.0及以上版本
- CUDA建议使用11.4以上版本。
Skywork-13B-Base模型,Skywork-13B-Chat模型和Skywork-13B-Math模型运行下面的脚本进行Python依赖安装。
- Python 3.8 and above
- Pytorch 2.0 and above
- CUDA 11.4 and above are recommended.
For the Skywork-13B-Base, Skywork-13B-Chat, and Skywork-13B-Math models, run the following script to install the Python dependencies:
```shell
pip install -r requirements.txt
```
## Huggingface模型测试(Demonstration)
### Base 模型推理(Base Model Inference)
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation import GenerationConfig
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("SkyworkAI/Skywork-13B-Base", trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("SkyworkAI/Skywork-13B-Base", device_map="auto", trust_remote_code=True).eval()
>>> inputs = tokenizer('陕西的省会是西安', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
陕西的省会是西安,西安是我国著名的古都,在历史上有十三个朝代在此建都,所以西安又被称为“十三朝古都”。西安是我国著名的旅游城市,每年都有大量的游客来到西安旅游,西安的旅游资源非常丰富,有很多著名的旅游景点,比如秦始皇兵马俑、大雁塔、华清池、大唐芙蓉园、西安城墙、大明宫国家遗址公园、西安碑林博物馆、西安钟楼、西安鼓楼、西安半坡博物馆、西安大兴善寺、西安小雁塔
>>> inputs = tokenizer('陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州', return_tensors='pt').to(model.device)
>>> response = model.generate(inputs.input_ids, max_length=128)
>>> print(tokenizer.decode(response.cpu()[0], skip_special_tokens=True))
陕西的省会是西安,甘肃的省会是兰州,河南的省会是郑州,湖北的省会是武汉,湖南的省会是长沙,江西的省会是南昌,安徽的省会是合肥,江苏的省会是南京,浙江的省会是杭州,福建的省会是福州,广东的省会是广州,广西的省会是南宁,海南的省会是海口,四川的省会是成都,贵州的省会是贵阳,云南的省会是昆明,西藏的省会是拉萨,青海的省会是西宁,宁夏的省会是银川,新疆的省会是乌鲁木齐。
```
# 模型微调(Fine-tuning)
## 全量微调(Full-parameter Fine-tuning)
使用Skywork-13B-Base模型进行预训练微调
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
## launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt.sh
```
使用Skywork-13B-Base模型进行有监督微调(SFT, Supervised Fine-tuning)
```bash
## preprocess data and launch training
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft.sh
```
## LoRA微调(PEFT)
使用Skywork-13B-Base模型以及LoRA进行预训练微调(Continued pre-training with the Skywork-13B-Base model using LoRA)
```bash
## preprocess continue pretraining data
## Because pre-training data is usually large, we use a script to process the training data separately.
python train/pt_data_preprocess.py \
-t $MODEL_PATH \
-i data/pt_train.jsonl \
-o data_cache/pt_train_demo
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export MODEL_PATH=skywork-13b-models/skywork-13b-base
export DATA_CACHE_DIR=data_cache/pt_train_demo/pt_train
bash bash_scripts/skywork_13b_pt_lora.sh
```
使用Skywork-13B-Base模型以及LoRA进行有监督微调(SFT, Supervised Fine-tuning)
```bash
export WANDB_API_KEY=YOUR_WANDB_KEY
export WANDB_ENTITY=skywork
export WANDB_PROJECT=skywork-13b-opensource
export SFT_DATA_DIR=data/sft_data
export DATA_CACHE_DIR=data_cache/sft_train_demo
bash bash_scripts/skywork_13b_sft_lora.sh
```
# 声明和协议(Declaration and License Agreement)
## 声明(Declaration)
我们在此声明,不要利用Skywork模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Skywork 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用skywork开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that the Skywork model should not be used for any activities that pose a threat to national or societal security or engage in unlawful actions. Additionally, we request users not to deploy the Skywork model for internet services without appropriate security reviews and records. We hope that all users will adhere to this principle to ensure that technological advancements occur in a regulated and lawful environment.
We have done our utmost to ensure the compliance of the data used during the model's training process. However, despite our extensive efforts, due to the complexity of the model and data, there may still be unpredictable risks and issues. Therefore, if any problems arise as a result of using the Skywork open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility.
## 协议(License Agreement)
社区使用Skywork模型需要遵循[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)。Skywork模型支持商业用途,如果您计划将Skywork模型或其衍生品用于商业目的,无需再次申请, 但请您仔细阅读[《Skywork 模型社区许可协议》](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20模型社区许可协议.pdf)并严格遵守相关条款。
The community usage of Skywork model requires [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf). The Skywork model supports commercial use. If you plan to use the Skywork model or its derivatives for commercial purposes, you must abide by terms and conditions within [Skywork Community License](https://github.com/SkyworkAI/Skywork/blob/main/Skywork%20Community%20License.pdf).
# 引用和联系我们(Contact Us and Citation)
如果您觉得我们的工作对您有帮助,欢迎引用我们的论文~
If you find our work helpful, please feel free to cite our paper~
```
@misc{wei2023skywork,
title={Skywork: A More Open Bilingual Foundation Model},
author={Tianwen Wei and Liang Zhao and Lichang Zhang and Bo Zhu and Lijie Wang and Haihua Yang and Biye Li and Cheng Cheng and Weiwei Lü and Rui Hu and Chenxia Li and Liu Yang and Xilin Luo and Xuejie Wu and Lunan Liu and Wenjun Cheng and Peng Cheng and Jianhao Zhang and Xiaoyu Zhang and Lei Lin and Xiaokun Wang and Yutuan Ma and Chuanhai Dong and Yanqi Sun and Yifu Chen and Yongyi Peng and Xiaojuan Liang and Shuicheng Yan and Han Fang and Yahui Zhou},
year={2023},
eprint={2310.19341},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{skyworkmath,
title={SkyMath: Technical Report},
author={Liu Yang, Haihua Yang, Wenjun Cheng, Lei Lin, Chenxia Li, Yifu Chen, Lunan Liu, Jianfei Pan, Tianwen Wei, Biye Li, Liang Zhao, Lijie Wang, Bo Zhu, Guoliang Li, Xuejie Wu, Xilin Luo, Rui Hu},
journal={arXiv preprint arXiv: 2310.16713},
url={https://arxiv.org/abs/2310.16713},
year={2023}
}
```
```
@article{Skywork_Multi-Modal_Group_Empirical_Study_Towards_2023,
author = {Skywork Multi-Modal Group},
month = sep,
title = {{Empirical Study Towards Building An Effective Multi-Modal Large Language Model}},
year = {2023}
}
```
|
wheattoast11/gemma-2-reasoner-9b | wheattoast11 | "2025-01-09T20:52:24Z" | 97 | 1 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-09T20:46:35Z" | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wheattoast11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
baek26/wiki_asp-animal_4639_wiki_asp-animal_2910_bart-base | baek26 | "2024-03-26T06:54:59Z" | 103 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:baek26/wiki_asp-animal_2910_bart-base",
"base_model:finetune:baek26/wiki_asp-animal_2910_bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-26T04:48:29Z" | ---
license: apache-2.0
base_model: baek26/wiki_asp-animal_2910_bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: wiki_asp-animal_4639_wiki_asp-animal_2910_bart-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_asp-animal_4639_wiki_asp-animal_2910_bart-base
This model is a fine-tuned version of [baek26/wiki_asp-animal_2910_bart-base](https://huggingface.co/baek26/wiki_asp-animal_2910_bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5290
- Rouge1: 0.1446
- Rouge2: 0.0631
- Rougel: 0.1276
- Rougelsum: 0.1277
- Gen Len: 15.8808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.93 | 500 | 2.5627 | 0.1435 | 0.0614 | 0.1262 | 0.1261 | 15.9017 |
| No log | 3.87 | 1000 | 2.5538 | 0.1373 | 0.0583 | 0.121 | 0.1212 | 15.4998 |
| No log | 5.8 | 1500 | 2.5393 | 0.1465 | 0.0638 | 0.1295 | 0.1297 | 16.0893 |
| 2.3118 | 7.74 | 2000 | 2.5336 | 0.1517 | 0.0666 | 0.1336 | 0.1339 | 16.2204 |
| 2.3118 | 9.67 | 2500 | 2.5290 | 0.1446 | 0.0631 | 0.1276 | 0.1277 | 15.8808 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mlx-community/encodec-24khz-bfloat16 | mlx-community | "2024-09-18T12:53:49Z" | 6 | 0 | mlx | [
"mlx",
"safetensors",
"encodec",
"en",
"license:other",
"region:us"
] | null | "2024-09-18T12:53:45Z" | ---
language: en
license: other
tags:
- mlx
library: mlx
---
The model [mlx-community/encodec-24khz-bfloat16](https://huggingface.co/mlx-community/encodec-24khz-bfloat16) was
converted to MLX format from
[facebook/encodec_24khz](https://huggingface.co/facebook/encodec_24khz).
This model is intended to be used with the [EnCodec MLX
example](https://github.com/ml-explore/mlx-examples/tree/main/encodec).
|
justmdvk/hello | justmdvk | "2025-04-05T06:29:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-05T06:29:12Z" | (no model card: the card field captured an HTTP 429 rate-limit page instead of content) |
nadejdatarabukina/24157ed6-3b4e-4824-a958-f639f9c80669 | nadejdatarabukina | "2025-01-21T23:36:37Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | "2025-01-21T23:29:31Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 24157ed6-3b4e-4824-a958-f639f9c80669
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 12cf1f3e6ad1e2ad_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/12cf1f3e6ad1e2ad_train_data.json
type:
field_input: negative
field_instruction: feature_clean
field_output: positive
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: nadejdatarabukina/24157ed6-3b4e-4824-a958-f639f9c80669
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/12cf1f3e6ad1e2ad_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d6370a70-8bde-4098-b940-1745b7f97dcb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d6370a70-8bde-4098-b940-1745b7f97dcb
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 24157ed6-3b4e-4824-a958-f639f9c80669
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0027 | 1 | 3.7334 |
| 3.5861 | 0.0136 | 5 | 3.6200 |
| 3.3192 | 0.0272 | 10 | 3.0217 |
| 2.7936 | 0.0407 | 15 | 2.6128 |
| 2.3458 | 0.0543 | 20 | 2.4026 |
| 2.2535 | 0.0679 | 25 | 2.3236 |
| 2.4221 | 0.0815 | 30 | 2.3055 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cortecs/QwQ-32B-FP8-Dynamic | cortecs | "2025-03-07T15:41:17Z" | 11 | 1 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/QwQ-32B",
"base_model:quantized:Qwen/QwQ-32B",
"compressed-tensors",
"region:us"
] | null | "2025-03-06T16:45:28Z" | ---
base_model: Qwen/QwQ-32B
---
This is a quantization of the [QwQ-32B](https://huggingface.co/Qwen/QwQ-32B).
QwQ-32B is a medium-sized reasoning model in the Qwen series with 32.5 billion parameters, and it outperforms conventional instruction-tuned models by a significant margin on tasks that require multi-step thinking. Its transformer architecture incorporates RoPE, SwiGLU, and RMSNorm, and supports context lengths of up to 131,072 tokens. The model is optimized for challenging downstream tasks such as hard mathematical problems and standardized multiple-choice questions, making it well suited to settings where sophisticated reasoning is required.
## Evaluations
This model provides an accuracy recovery of 100.0%.
| __English__ | __[QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)__ | __[QwQ-32B-FP8-Dynamic (this)](https://huggingface.co/cortecs/QwQ-32B-FP8-Dynamic)__ |
|:--------------|-----------------------------------------------------:|---------------------------------------------------------------------------------------:|
| Avg. | 74.05 | 74.05 |
| ARC | 72.7 | 72.8 |
| Hellaswag | 75.4 | 75.3 |
We did not check for data contamination.
Evaluation was done using [Eval. Harness](https://github.com/EleutherAI/lm-evaluation-harness) with `limit=1000`.
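A command along the following lines reproduces that setup (a sketch — the exact task names and backend flags used are assumptions):
```
lm_eval --model vllm \
  --model_args pretrained=cortecs/QwQ-32B-FP8-Dynamic \
  --tasks arc_challenge,hellaswag \
  --limit 1000
```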
## Usage
Install **vLLM** and
run the [server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#openai-compatible-server):
```
python -m vllm.entrypoints.openai.api_server --model cortecs/QwQ-32B-FP8-Dynamic --max-model-len 131072 --gpu-memory-utilization 0.95
```
Access the model:
```
curl http://localhost:8000/v1/completions -H "Content-Type: application/json" -d ' {
"model": "cortecs/QwQ-32B-FP8-Dynamic",
"prompt": "San Francisco is a"
} '
```
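Because the vLLM server speaks the OpenAI API, the endpoint can also be queried with the official `openai` Python client (a minimal sketch):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key
completion = client.completions.create(
    model="cortecs/QwQ-32B-FP8-Dynamic",
    prompt="San Francisco is a",
)
print(completion.choices[0].text)
```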
|
mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF | mradermacher | "2025-02-18T03:18:28Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:normster/RealGuardrails",
"base_model:normster/RealGuardrails-Llama3.1-8B-Instruct-SFT",
"base_model:quantized:normster/RealGuardrails-Llama3.1-8B-Instruct-SFT",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-18T02:54:21Z" | ---
base_model: normster/RealGuardrails-Llama3.1-8B-Instruct-SFT
datasets:
- normster/RealGuardrails
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/normster/RealGuardrails-Llama3.1-8B-Instruct-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
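For example, a single quant from the table below can be fetched with `huggingface-cli` (a sketch; pick the file that fits your hardware):
```
huggingface-cli download mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF \
  RealGuardrails-Llama3.1-8B-Instruct-SFT.Q4_K_M.gguf --local-dir .
```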
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/RealGuardrails-Llama3.1-8B-Instruct-SFT-GGUF/resolve/main/RealGuardrails-Llama3.1-8B-Instruct-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rkdaniels/llama-3-2-1b-trump-test-1-epochs | rkdaniels | "2025-03-16T17:07:49Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-16T17:07:16Z" | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: llama-3-2-1b-trump-test-1-epochs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3-2-1b-trump-test-1-epochs
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rkdaniels/llama-3-2-1b-trump-test-1-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.4.1+cu124
- Datasets: 3.4.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
baby-dev/83c5e8a6-53fb-4b7a-93bc-bdcde3b3e735 | baby-dev | "2025-02-15T18:08:15Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.3",
"base_model:adapter:lmsys/vicuna-7b-v1.3",
"region:us"
] | null | "2025-02-15T17:44:19Z" | ---
library_name: peft
base_model: lmsys/vicuna-7b-v1.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 83c5e8a6-53fb-4b7a-93bc-bdcde3b3e735
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 83c5e8a6-53fb-4b7a-93bc-bdcde3b3e735
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kromtao/KROMme_14_savcgfy | Kromtao | "2025-02-10T04:10:53Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-10T04:09:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sachinkelenjaguri/sa_distilbert-sentence-transformer | Sachinkelenjaguri | "2023-05-25T12:26:38Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-05-25T12:21:48Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2813 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 843,
"weight_decay": 0.01
}
```
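Taken together, the parameters above correspond to a training loop roughly like the following sketch (the base checkpoint and the training pairs are assumptions — the card does not name them):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("distilbert-base-uncased")  # assumed base; the card only says DistilBertModel
train_examples = [InputExample(texts=["an anchor sentence", "its matching sentence"])]  # placeholder pairs
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=843,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```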
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ema19/NeuralPipe-7B-slerp | ema19 | "2024-01-31T15:20:23Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-31T15:16:07Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ema19/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token | fujie | "2023-05-19T11:13:40Z" | 4 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"jp",
"dataset:studies_multi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | "2023-05-19T11:12:12Z" | ---
tags:
- espnet
- audio
- text-to-speech
language: jp
datasets:
- studies_multi
license: cc-by-4.0
---
## ESPnet2 TTS model
### `fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token`
This model was trained by Shinya Fujie using the studies_multi recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 2219358fbd064d79214b12540afd498feaf49596
pip install -e .
cd egs2/studies_multi/tts1
./run.sh --skip_data_prep false --skip_train true --download_model fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token
```
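Alternatively, inference can be run directly from Python with the standard ESPnet2 API (a sketch; assumes `espnet_model_zoo` and `soundfile` are installed):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "fujie/fujie_studies_tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token"
)
# Japanese input text; the g2p front-end is pyopenjtalk-based.
result = tts("こんにちは。今日はいい天気ですね。")
sf.write("out.wav", result["wav"].numpy(), tts.fs)
```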
## TTS config
<details><summary>expand</summary>
```
config: ./conf/tuning/finetune_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_finetune_vits_raw_phn_jaconv_pyopenjtalk_prosody_with_special_token
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 38263
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- downloads/models--espnet--kan-bayashi_jsut_vits_prosody/snapshots/3a859bfd2c9710846fa6244598000f0578a2d3e4/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody/train.total_count.ave_10best.pth
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/train/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/valid/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_prosody_with_special_token/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/22k/raw/ITA_tr_no_dev/text
- text
- text
- - dump/22k/raw/ITA_tr_no_dev/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/22k/raw/ITA_dev/text
- text
- text
- - dump/22k/raw/ITA_dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adamw
optim_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
generator_only: false
token_list:
- <blank>
- <unk>
- a
- o
- i
- '['
- '#'
- u
- ']'
- e
- k
- n
- t
- r
- s
- N
- m
- _
- sh
- d
- g
- ^
- $
- w
- cl
- h
- y
- b
- j
- ts
- ch
- z
- p
- f
- ky
- ry
- gy
- hy
- ny
- by
- my
- py
- v
- dy
- '?'
- ty
- <happy>
- <angry>
- <sad>
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: jaconv
g2p: pyopenjtalk_prosody_with_special_token
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 50
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wzzju/Qwen2.5-Math-7B-GRPO | wzzju | "2025-03-14T02:35:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-13T11:24:28Z" | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
---
# Model Card for wzzju/Qwen2.5-Math-7B-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wzzju/Qwen2.5-Math-7B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wzzjuer/open-r1-grpo-train/runs/pyna1sio)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mrferr3t/26e96b95-7a1c-4a5b-872f-f725abb141de | mrferr3t | "2025-04-14T06:47:35Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-04-14T00:09:42Z" | |
mradermacher/deepseek-r1-1.5b-indian-culture-GGUF | mradermacher | "2025-03-19T00:03:10Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"qwen2",
"sft",
"deepseek",
"indian-culture",
"en",
"dataset:deepkaria/indian-culture-dataset",
"base_model:deepkaria/deepseek-r1-1.5b-indian-culture",
"base_model:quantized:deepkaria/deepseek-r1-1.5b-indian-culture",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-18T23:49:16Z" | ---
base_model: deepkaria/deepseek-r1-1.5b-indian-culture
datasets:
- deepkaria/indian-culture-dataset
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- qwen2
- sft
- deepseek
- indian-culture
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/deepkaria/deepseek-r1-1.5b-indian-culture
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
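As a concrete example, one of the quants from the table below can be loaded with `llama-cpp-python` (a sketch; the chat-completion call assumes the GGUF embeds a chat template):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-1.5b-indian-culture.Q4_K_M.gguf",  # a file from the table below
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe the festival of Pongal."}]
)
print(out["choices"][0]["message"]["content"])
```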
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-r1-1.5b-indian-culture-GGUF/resolve/main/deepseek-r1-1.5b-indian-culture.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nblt1998aakk/bi-lora-0.1-trained-xl | nblt1998aakk | "2025-01-22T04:00:41Z" | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-01-22T03:35:51Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a TOK icon
widget:
- text: a TOK icon of a flying bird, in the style of TOK
output:
url: image_0.png
- text: a TOK icon of a flying bird, in the style of TOK
output:
url: image_1.png
- text: a TOK icon of a flying bird, in the style of TOK
output:
url: image_2.png
- text: a TOK icon of a flying bird, in the style of TOK
output:
url: image_3.png
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - nblt1998aakk/bi-lora-0.1-trained-xl
<Gallery />
## Model description
These are nblt1998aakk/bi-lora-0.1-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a TOK icon` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/nblt1998aakk/bi-lora-0.1-trained-xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# See the sketch below this block for a minimal way to run the pipeline.
```
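Since the card's snippet is still a TODO, here is a minimal sketch assuming the standard diffusers SDXL-plus-LoRA flow (settings such as `num_inference_steps` are assumptions; the fp16-fix VAE mirrors the training note above):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Same VAE that was used during training (see the model description above).
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nblt1998aakk/bi-lora-0.1-trained-xl")

image = pipe("a TOK icon of a flying bird, in the style of TOK", num_inference_steps=30).images[0]
image.save("tok_icon.png")
```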
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lesso11/3eeb5ed5-f5f9-4eae-9ee3-4013c4b4f871 | lesso11 | "2025-04-09T21:11:19Z" | 6 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-04-05T03:04:16Z" | |
alvinwongster/LuminAI | alvinwongster | "2025-03-18T05:47:28Z" | 173 | 0 | null | [
"safetensors",
"llama",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"region:us"
] | null | "2025-03-11T03:42:08Z" | ---
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
---
# LuminAI

## Model Description
**Lumin.AI** is a supportive AI assistant designed to provide immediate emotional support to individuals outside of regular consulting hours. It acts as a supplementary tool for patients and therapists, ensuring that mental health care is more accessible and responsive to users' needs
## Model Demo


## Model Dataset
The chatbot has been trained using [conversational data](https://huggingface.co/alvinwongster/LuminAI/tree/main/dataset) that mimics exchanges between a patient and a therapist. Five topics were chosen, and 100 conversations were gathered for each:
- General
- Relationships
- Insecurities
- Victim Mentality
- Self-Improvement
## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick a device; the original snippet referenced `device` without defining it.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("alvinwongster/LuminAI")
model = AutoModelForCausalLM.from_pretrained("alvinwongster/LuminAI").to(device)

prompt = "What is depression?"
full_prompt = f"User: {prompt}\nBot:"
inputs = tokenizer(full_prompt, return_tensors="pt")
inputs = {key: val.to(device) for key, val in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=650,
    do_sample=True,  # required for temperature/top_p/top_k to take effect
    repetition_penalty=1.3,
    no_repeat_ngram_size=3,
    temperature=0.8,
    top_p=0.9,
    top_k=50
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
if "Bot:" in response:
response = response.split("Bot:")[-1].strip()
print(response)
```
## Model Metrics
To evaluate the chatbot's performance for our use case, the following weighted scoring system was used (a worked example follows the table below):
- Empathy Score (40%):
- Measures how well the chatbot responds with empathy.
- Human-Likeness Score (20%):
- Assesses how natural and human-like the responses feel.
- BERTScore (30%):
- Evaluates semantic similarity between chatbot replies and therapist responses. Split equally between F1, Recall and Precision
- Time Taken (10%):
  - Time taken to generate a response; a shorter time improves the user experience.
|Metrics |GPT |Llama|LuminAI|
|--------------------|:---:|:---:|:-----:|
|Empathy Score |0.8 |0.79 |0.79 |
|Human Likeness |0.27 |0.45 |0.5 |
|BERTScore F1 |0.45 |0.48 |0.51 |
|BERTScore Recall |0.51 |0.53 |0.55 |
|BERTScore Precision |0.41 |0.44 |0.47 |
|Time Taken |89.65|15.85|39.42 |
|Total Score |0.54 |0.65 |0.63 |
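As a worked example, the weighting can be applied as in the sketch below. The card does not document how raw generation time is normalised into a 0-1 score, so `time_score` is an assumed placeholder chosen to reproduce the published total:
```python
def total_score(empathy, human_likeness, f1, recall, precision, time_score):
    bert = (f1 + recall + precision) / 3  # BERTScore components weighted equally
    return 0.4 * empathy + 0.2 * human_likeness + 0.3 * bert + 0.1 * time_score

# LuminAI column from the table above (time_score = 0.61 is a guess, not from the card)
print(round(total_score(0.79, 0.5, 0.51, 0.55, 0.47, 0.61), 2))  # -> 0.63
```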
## Github Link
Visit [here](https://github.com/alvinnnnnnnnnn/MentalHealth-LLM) for more information on how I trained the model.
Try the product [here](https://luminai-chatbot.streamlit.app/)! |
mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF | mradermacher | "2024-11-06T06:47:00Z" | 26 | 0 | transformers | [
"transformers",
"gguf",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"base_model:quantized:Trendyol/Trendyol-LLM-7b-chat-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-04T06:47:19Z" | ---
base_model: Trendyol/Trendyol-LLM-7b-chat-v0.1
language:
- tr
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
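With a recent llama.cpp build, a downloaded quant can then be run directly, for example (a sketch):
```
./llama-cli -m Trendyol-LLM-7b-chat-v0.1.Q4_K_M.gguf -p "Hello, how are you?" -n 128
```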
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Trendyol-LLM-7b-chat-v0.1-GGUF/resolve/main/Trendyol-LLM-7b-chat-v0.1.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
superbigtree/Meta-Llama-3.1-8B-enhanced | superbigtree | "2024-09-30T22:36:46Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-30T22:34:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Faust95/DeepSeek-R1-Distill-Llama-8B-Math8k-GRPO-p1 | Faust95 | "2025-03-05T11:04:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-05T04:09:01Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: DeepSeek-R1-Distill-Llama-8B-Math8k-GRPO-p1
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for DeepSeek-R1-Distill-Llama-8B-Math8k-GRPO-p1
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Faust95/DeepSeek-R1-Distill-Llama-8B-Math8k-GRPO-p1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zinuzian/huggingface/runs/i0j088cu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/stabilityai_-_StableBeluga-7B-4bits | RichardErkhov | "2024-05-04T17:43:26Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2307.09288",
"arxiv:2306.02707",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-04T17:36:22Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
StableBeluga-7B - bnb 4bits
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/StableBeluga-7B/
Original model description:
---
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
language:
- en
pipeline_tag: text-generation
---
# Stable Beluga 7B
Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability AI's best language models for free
## Model Description
`Stable Beluga 7B` is a Llama2 7B model fine-tuned on an Orca-style dataset
## Usage
Start chatting with `Stable Beluga 7B` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-7B", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are StableBeluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Stable Beluga 7B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant:
The output of Stable Beluga 7B
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 7B is an auto-regressive language model fine-tuned on Llama2 7B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`Stable Beluga 7B`) is licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga-7B/blob/main/LICENSE.txt)
* **Contact**: For questions and comments about the model, please email `[email protected]`
### Training Dataset
`Stable Beluga 7B` is trained on our internal Orca-style dataset.
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW. We outline the following hyperparameters:
| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
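As a rough illustration, the Orca pt1 row corresponds to an optimizer/schedule setup along these lines. This is a sketch, not Stability's training code: the stock HF cosine schedule decays to 0 rather than the table's 3e-6 floor, and the model stand-in and step count are assumptions.
```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)   # stand-in for the actual LLM
num_training_steps = 1_000      # assumed; not stated in the card

# AdamW with betas (0.9, 0.95), weight decay 1e-6, lr 3e-5, 100 warm-up steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5,
                              betas=(0.9, 0.95), weight_decay=1e-6)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=100,
                                            num_training_steps=num_training_steps)
```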
## Ethical Considerations and Limitations
Beluga is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Citations
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
c-tawayip/mt5-small-Multitask-Thai-Text-Generator | c-tawayip | "2024-05-01T08:13:44Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-13T09:15:47Z" |
The model supports the following task prefixes; prepend one to the input text:

- `Text Classification: {text}`
- `Title Summarization: {text}`
- `Abstract Summarization: {text}`
- `Tags Suggestion: {text}`
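A minimal usage sketch, assuming the prefixes are prepended verbatim to the input (the Thai input string is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text2text-generation",
                     model="c-tawayip/mt5-small-Multitask-Thai-Text-Generator")
text = "ข้อความภาษาไทยของคุณ"  # placeholder Thai input
print(generator("Title Summarization: " + text)[0]["generated_text"])
```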
|
auxyus/a79ce298-c25d-4f0b-83ad-374f86af01aa | auxyus | "2025-02-06T20:54:06Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
] | null | "2025-02-06T20:05:38Z" | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a79ce298-c25d-4f0b-83ad-374f86af01aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5a05a6246bee5bf1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a05a6246bee5bf1_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: auxyus/a79ce298-c25d-4f0b-83ad-374f86af01aa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 8
mlflow_experiment_name: /tmp/5a05a6246bee5bf1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 96ccb46d-771f-4801-9177-4f90f1a60c0d
wandb_project: Gradients-On-Two
wandb_run: your_name
wandb_runid: 96ccb46d-771f-4801-9177-4f90f1a60c0d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a79ce298-c25d-4f0b-83ad-374f86af01aa
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the dataset specified in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.1182
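Since this repository holds a LoRA adapter (see `library_name: peft` above), a minimal loading sketch looks like the following; the device and dtype choices are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Orenguteng/Llama-3-8B-Lexi-Uncensored",
    torch_dtype=torch.bfloat16, device_map="auto",
)
model = PeftModel.from_pretrained(base, "auxyus/a79ce298-c25d-4f0b-83ad-374f86af01aa")
tokenizer = AutoTokenizer.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")
```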
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 321
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8063 | 0.0093 | 1 | 0.7730 |
| 0.3409 | 0.4673 | 50 | 0.2820 |
| 0.1443 | 0.9346 | 100 | 0.2110 |
| 0.1281 | 1.4019 | 150 | 0.1688 |
| 0.1053 | 1.8692 | 200 | 0.1361 |
| 0.0376 | 2.3364 | 250 | 0.1273 |
| 0.0218 | 2.8037 | 300 | 0.1182 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
johnsutor/Llama-3-8B-Instruct_breadcrumbs_ties-density-0.7-gamma-0.01 | johnsutor | "2024-06-07T21:23:06Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:merge:DeepMount00/Llama-3-8b-Ita",
"base_model:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:merge:VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct",
"base_model:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"base_model:merge:failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
"base_model:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:merge:jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:nbeerbower/llama-3-gutenberg-8B",
"base_model:merge:nbeerbower/llama-3-gutenberg-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-07T21:16:16Z" | ---
base_model:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- DeepMount00/Llama-3-8b-Ita
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
license: apache-2.0
tags:
- mergekit
- merge
---
# Model Merge Parameters
Base model: meta-llama/Meta-Llama-3-8B-Instruct

Models:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- DeepMount00/Llama-3-8b-Ita
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- meta-llama/Meta-Llama-3-8B-Instruct

Merge method: breadcrumbs_ties

Random seed: 42

density: 0.7
gamma: 0.01
normalize: true
weight: 1.0
|
urarik/w2v-bert-2.0-Chinese-colab-CV16.0-aishell-ark-gs-vtb-ts-fs | urarik | "2025-03-01T07:18:32Z" | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-02-25T14:45:13Z" | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-Chinese-colab-CV16.0-aishell-ark-gs-vtb-ts-fs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-Chinese-colab-CV16.0-aishell-ark-gs-vtb-ts-fs
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6568
- Wer: 0.9998
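A minimal CTC inference sketch (note the near-1.0 WER above, so outputs are unlikely to be useful; the silent waveform is a placeholder):
```python
import torch
from transformers import AutoProcessor, Wav2Vec2BertForCTC

repo = "urarik/w2v-bert-2.0-Chinese-colab-CV16.0-aishell-ark-gs-vtb-ts-fs"
processor = AutoProcessor.from_pretrained(repo)
model = Wav2Vec2BertForCTC.from_pretrained(repo)

audio = [0.0] * 16_000  # placeholder: one second of 16 kHz silence
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```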
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.658 | 0.9981 | 180 | 0.7483 | 1.0 |
| 1.2313 | 1.9925 | 360 | 0.6796 | 0.9998 |
| 1.0232 | 2.9870 | 540 | 0.6568 | 0.9998 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 2.17.1
- Tokenizers 0.21.0
|
souvik0306/Eval_Quantised_facebook_opt_350m | souvik0306 | "2024-05-31T02:23:48Z" | 80 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-05-31T02:23:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kernelPanicAtTheDisco/NuExtract-2-2B-NFA | kernelPanicAtTheDisco | "2025-03-26T20:10:20Z" | 0 | 0 | null | [
"safetensors",
"internvl_chat",
"nlp",
"text-generation",
"conversational",
"custom_code",
"multilingual",
"base_model:OpenGVLab/InternVL2_5-2B",
"base_model:finetune:OpenGVLab/InternVL2_5-2B",
"license:mit",
"region:us"
] | text-generation | "2025-03-26T20:09:38Z" | ---
license: mit
language:
- multilingual
tags:
- nlp
base_model: OpenGVLab/InternVL2_5-2B
pipeline_tag: text-generation
inference: true
---
# NuExtract-2-2B by NuMind 🔥
NuExtract 2.0 is a family of models trained specifically for structured information extraction tasks. It supports multimodal inputs and is multilingual.
We provide several versions of different sizes, all based on the InternVL2.5 family.
| Model Size | Model Name | Base Model | Huggingface Link |
|------------|------------|------------|------------------|
| 2B | NuExtract-2.0-2B | [InternVL2_5-2B](https://huggingface.co/OpenGVLab/InternVL2_5-2B) | [NuExtract-2-2B](https://huggingface.co/numind/NuExtract-2-2B) |
| 4B | NuExtract-2.0-4B | [InternVL2_5-4B](https://huggingface.co/OpenGVLab/InternVL2_5-4B) | [NuExtract-2-4B](https://huggingface.co/numind/NuExtract-2-4B) |
| 8B | NuExtract-2.0-8B | [InternVL2_5-8B](https://huggingface.co/OpenGVLab/InternVL2_5-8B) | [NuExtract-2-8B](https://huggingface.co/numind/NuExtract-2-8B) |
## Overview
To use the model, provide an input text/image and a JSON template describing the information you need to extract. The template should be a JSON object, specifying field names and their expected type.
Supported types include:
* `verbatim-string` - instructs the model to extract text that is present verbatim in the input.
* `string` - a generic string field that can incorporate paraphrasing/abstraction.
* `integer` - a whole number.
* `number` - a whole or decimal number.
* `date-time` - ISO formatted date.
* Array of any of the above types (e.g. `["string"]`)
* `enum` - a choice from a set of possible answers (represented in template as an array of options, e.g. `["yes", "no", "maybe"]`).
* `multi-label` - an enum that can have multiple possible answers (represented in template as a double-wrapped array, e.g. `[["A", "B", "C"]]`).
If the model does not identify relevant information for a field, it will return `null` or `[]` (for arrays and multi-labels).
The following is an example template:
```json
{
"first_name": "verbatim-string",
"last_name": "verbatim-string",
"description": "string",
"age": "integer",
"gpa": "number",
"birth_date": "date-time",
"nationality": ["France", "England", "Japan", "USA", "China"],
"languages_spoken": [["English", "French", "Japanese", "Mandarin", "Spanish"]]
}
```
An example output:
```json
{
"first_name": "Susan",
"last_name": "Smith",
"description": "A student studying computer science.",
"age": 20,
"gpa": 3.7,
"birth_date": "2005-03-01",
"nationality": "England",
"languages_spoken": ["English", "French"]
}
```
⚠️ We recommend using NuExtract with a temperature at or very close to 0. Some inference frameworks, such as Ollama, use a default of 0.7 which is not well suited to many extraction tasks.
## Inference
Use the following code to handle loading and preprocessing of input data:
```python
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def prepare_inputs(messages, image_paths, tokenizer, device='cuda', dtype=torch.bfloat16):
"""
Prepares multi-modal input components (supports multiple images per prompt).
Args:
messages: List of input messages/prompts (strings or dicts with 'role' and 'content')
image_paths: List where each element is either None (for text-only) or a list of image paths
tokenizer: The tokenizer to use for applying chat templates
device: Device to place tensors on ('cuda', 'cpu', etc.)
dtype: Data type for image tensors (default: torch.bfloat16)
Returns:
dict: Contains 'prompts', 'pixel_values_list', and 'num_patches_list' ready for the model
"""
# Make sure image_paths list is at least as long as messages
if len(image_paths) < len(messages):
# Pad with None for text-only messages
image_paths = image_paths + [None] * (len(messages) - len(image_paths))
# Process images and collect patch information
loaded_images = []
num_patches_list = []
for paths in image_paths:
if paths and isinstance(paths, list) and len(paths) > 0:
# Load each image in this prompt
prompt_images = []
prompt_patches = []
for path in paths:
# Load the image
img = load_image(path).to(dtype=dtype, device=device)
# Ensure img has correct shape [patches, C, H, W]
if len(img.shape) == 3: # [C, H, W] -> [1, C, H, W]
img = img.unsqueeze(0)
prompt_images.append(img)
# Record the number of patches for this image
prompt_patches.append(img.shape[0])
loaded_images.append(prompt_images)
num_patches_list.append(prompt_patches)
else:
# Text-only prompt
loaded_images.append(None)
num_patches_list.append([])
# Create the concatenated pixel_values_list
pixel_values_list = []
for prompt_images in loaded_images:
if prompt_images:
# Concatenate all images for this prompt
pixel_values_list.append(torch.cat(prompt_images, dim=0))
else:
# Text-only prompt
pixel_values_list.append(None)
# Format messages for the model
if all(isinstance(m, str) for m in messages):
# Simple string messages: convert to chat format
batch_messages = [
[{"role": "user", "content": message}]
for message in messages
]
else:
# Assume messages are already in the right format
batch_messages = messages
# Apply chat template
prompts = tokenizer.apply_chat_template(
batch_messages,
tokenize=False,
add_generation_prompt=True
)
return {
'prompts': prompts,
'pixel_values_list': pixel_values_list,
'num_patches_list': num_patches_list
}
def construct_message(text, template, examples=None):
"""
Construct the individual NuExtract message texts, prior to chat template formatting.
"""
# add few-shot examples if needed
if examples is not None and len(examples) > 0:
icl = "# Examples:\n"
for row in examples:
icl += f"## Input:\n{row['input']}\n## Output:\n{row['output']}\n"
else:
icl = ""
return f"""# Template:\n{template}\n{icl}# Context:\n{text}"""
```
To handle inference:
```python
IMG_START_TOKEN='<img>'
IMG_END_TOKEN='</img>'
IMG_CONTEXT_TOKEN='<IMG_CONTEXT>'
def nuextract_generate(model, tokenizer, prompts, generation_config, pixel_values_list=None, num_patches_list=None):
"""
Generate responses for a batch of NuExtract inputs.
Support for multiple and varying numbers of images per prompt.
Args:
model: The vision-language model
tokenizer: The tokenizer for the model
pixel_values_list: List of tensor batches, one per prompt
Each batch has shape [num_images, channels, height, width] or None for text-only prompts
prompts: List of text prompts
generation_config: Configuration for text generation
num_patches_list: List of lists, each containing patch counts for images in a prompt
Returns:
List of generated responses
"""
img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
model.img_context_token_id = img_context_token_id
# Replace all image placeholders with appropriate tokens
modified_prompts = []
total_image_files = 0
total_patches = 0
image_containing_prompts = []
for idx, prompt in enumerate(prompts):
# check if this prompt has images
has_images = (pixel_values_list and
idx < len(pixel_values_list) and
pixel_values_list[idx] is not None and
isinstance(pixel_values_list[idx], torch.Tensor) and
pixel_values_list[idx].shape[0] > 0)
if has_images:
# prompt with image placeholders
image_containing_prompts.append(idx)
modified_prompt = prompt
patches = num_patches_list[idx] if (num_patches_list and idx < len(num_patches_list)) else []
num_images = len(patches)
total_image_files += num_images
total_patches += sum(patches)
# replace each <image> placeholder with image tokens
for i, num_patches in enumerate(patches):
image_tokens = IMG_START_TOKEN + IMG_CONTEXT_TOKEN * model.num_image_token * num_patches + IMG_END_TOKEN
modified_prompt = modified_prompt.replace('<image>', image_tokens, 1)
else:
# text-only prompt
modified_prompt = prompt
modified_prompts.append(modified_prompt)
# process all prompts in a single batch
tokenizer.padding_side = 'left'
model_inputs = tokenizer(modified_prompts, return_tensors='pt', padding=True)
input_ids = model_inputs['input_ids'].to(model.device)
attention_mask = model_inputs['attention_mask'].to(model.device)
eos_token_id = tokenizer.convert_tokens_to_ids("<|im_end|>\n".strip())
generation_config['eos_token_id'] = eos_token_id
# prepare pixel values
flattened_pixel_values = None
if image_containing_prompts:
# collect and concatenate all image tensors
all_pixel_values = []
for idx in image_containing_prompts:
all_pixel_values.append(pixel_values_list[idx])
flattened_pixel_values = torch.cat(all_pixel_values, dim=0)
print(f"Processing batch with {len(prompts)} prompts, {total_image_files} actual images, and {total_patches} total patches")
else:
print(f"Processing text-only batch with {len(prompts)} prompts")
# generate outputs
outputs = model.generate(
pixel_values=flattened_pixel_values, # will be None for text-only prompts
input_ids=input_ids,
attention_mask=attention_mask,
**generation_config
)
# Decode responses
responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return responses
```
To load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = ""
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2" # we recommend using flash attention
).to("cuda")
```
Simple 0-shot text-only example:
```python
template = """{"names": ["verbatim-string"]}"""
text = "John went to the restaurant with Mary. James went to the cinema."
input_messages = [construct_message(text, template)]
input_content = prepare_inputs(
messages=input_messages,
image_paths=[],
tokenizer=tokenizer,
)
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
with torch.no_grad():
result = nuextract_generate(
model=model,
tokenizer=tokenizer,
prompts=input_content['prompts'],
pixel_values_list=input_content['pixel_values_list'],
num_patches_list=input_content['num_patches_list'],
generation_config=generation_config
)
for y in result:
print(y)
# {"names": ["John", "Mary", "James"]}
```
Text-only input with an in-context example:
```python
template = """{"names": ["verbatim-string"], "female_names": ["verbatim-string"]}"""
text = "John went to the restaurant with Mary. James went to the cinema."
examples = [
{
"input": "Stephen is the manager at Susan's store.",
"output": """{"names": ["STEPHEN", "SUSAN"], "female_names": ["SUSAN"]}"""
}
]
input_messages = [construct_message(text, template, examples)]
input_content = prepare_inputs(
messages=input_messages,
image_paths=[],
tokenizer=tokenizer,
)
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
with torch.no_grad():
result = nuextract_generate(
model=model,
tokenizer=tokenizer,
prompts=input_content['prompts'],
pixel_values_list=input_content['pixel_values_list'],
num_patches_list=input_content['num_patches_list'],
generation_config=generation_config
)
for y in result:
print(y)
# {"names": ["JOHN", "MARY", "JAMES"], "female_names": ["MARY"]}
```
Example with image input and an in-context example. Image inputs should use the `<image>` placeholder in place of text, and image paths should be provided as a list in order of appearance in the prompt (here `0.jpg` is for the in-context example and `1.jpg` for the true input).
```python
template = """{"store": "verbatim-string"}"""
text = "<image>"
examples = [
{
"input": "<image>",
"output": """{"store": "Walmart"}"""
}
]
input_messages = [construct_message(text, template, examples)]
images = [
["0.jpg", "1.jpg"]
]
input_content = prepare_inputs(
messages=input_messages,
image_paths=images,
tokenizer=tokenizer,
)
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
with torch.no_grad():
result = nuextract_generate(
model=model,
tokenizer=tokenizer,
prompts=input_content['prompts'],
pixel_values_list=input_content['pixel_values_list'],
num_patches_list=input_content['num_patches_list'],
generation_config=generation_config
)
for y in result:
print(y)
# {"store": "Trader Joe's"}
```
Multi-modal batched input:
```python
inputs = [
# image input with no ICL examples
{
"text": "<image>",
"template": """{"store_name": "verbatim-string"}""",
"examples": None,
},
# image input with 1 ICL example
{
"text": "<image>",
"template": """{"store_name": "verbatim-string"}""",
"examples": [
{
"input": "<image>",
"output": """{"store_name": "Walmart"}""",
}
],
},
# text input with no ICL examples
{
"text": "John went to the restaurant with Mary. James went to the cinema.",
"template": """{"names": ["verbatim-string"]}""",
"examples": None,
},
# text input with ICL example
{
"text": "John went to the restaurant with Mary. James went to the cinema.",
"template": """{"names": ["verbatim-string"], "female_names": ["verbatim-string"]}""",
"examples": [
{
"input": "Stephen is the manager at Susan's store.",
"output": """{"names": ["STEPHEN", "SUSAN"], "female_names": ["SUSAN"]}"""
}
],
},
]
input_messages = [
construct_message(
x["text"],
x["template"],
x["examples"]
) for x in inputs
]
images = [
["0.jpg"],
["0.jpg", "1.jpg"],
None,
None
]
input_content = prepare_inputs(
messages=input_messages,
image_paths=images,
tokenizer=tokenizer,
)
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}
with torch.no_grad():
result = nuextract_generate(
model=model,
tokenizer=tokenizer,
prompts=input_content['prompts'],
pixel_values_list=input_content['pixel_values_list'],
num_patches_list=input_content['num_patches_list'],
generation_config=generation_config
)
for y in result:
print(y)
# {"store_name": "WAL*MART"}
# {"store_name": "Trader Joe's"}
# {"names": ["John", "Mary", "James"]}
# {"names": ["JOHN", "MARY", "JAMES"], "female_names": ["MARY"]}
```
## Template Generation
If you have existing schema files in other formats (e.g. XML, YAML) or want to start from an example, NuExtract 2.0 models can automatically generate a NuExtract template for you.
E.g. convert XML into a NuExtract template:
```python
def generate_template(description):
input_messages = [description]
input_content = prepare_inputs(
messages=input_messages,
image_paths=[],
tokenizer=tokenizer,
)
generation_config = {"do_sample": True, "temperature": 0.4, "max_new_tokens": 256}
with torch.no_grad():
result = nuextract_generate(
model=model,
tokenizer=tokenizer,
prompts=input_content['prompts'],
pixel_values_list=input_content['pixel_values_list'],
num_patches_list=input_content['num_patches_list'],
generation_config=generation_config
)
return result[0]
xml_template = """<SportResult>
<Date></Date>
<Sport></Sport>
<Venue></Venue>
<HomeTeam></HomeTeam>
<AwayTeam></AwayTeam>
<HomeScore></HomeScore>
<AwayScore></AwayScore>
<TopScorer></TopScorer>
</SportResult>"""
result = generate_template(xml_template)
print(result)
# {
# "SportResult": {
# "Date": "date-time",
# "Sport": "verbatim-string",
# "Venue": "verbatim-string",
# "HomeTeam": "verbatim-string",
# "AwayTeam": "verbatim-string",
# "HomeScore": "integer",
# "AwayScore": "integer",
# "TopScorer": "verbatim-string"
# }
# }
```
E.g. generate a template from natural language description:
```python
text = """Give me relevant info about startup companies mentioned."""
result = generate_template(text)
print(result)
# {
# "Startup_Companies": [
# {
# "Name": "verbatim-string",
# "Products": [
# "string"
# ],
# "Location": "verbatim-string",
# "Company_Type": [
# "Technology",
# "Finance",
# "Health",
# "Education",
# "Other"
# ]
# }
# ]
# }
``` |
godofmining/explorer_v1 | godofmining | "2025-02-25T23:19:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-25T23:17:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
janny127/autotrain-5e45b-p5z66 | janny127 | "2024-04-01T21:29:11Z" | 97 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T21:28:20Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
hkivancoral/hushem_40x_deit_base_rms_00001_fold3 | hkivancoral | "2023-12-24T18:03:31Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-24T17:25:29Z" | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9069767441860465
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8836
- Accuracy: 0.9070
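A minimal inference sketch (the image path is a placeholder; labels come from the `imagefolder` dataset the model was fine-tuned on):
```python
from transformers import pipeline

clf = pipeline("image-classification",
               model="hkivancoral/hushem_40x_deit_base_rms_00001_fold3")
print(clf("example_image.png"))  # placeholder path
```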
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0092 | 1.0 | 217 | 0.3600 | 0.9070 |
| 0.0049 | 2.0 | 434 | 0.6644 | 0.8837 |
| 0.0002 | 3.0 | 651 | 0.6352 | 0.8605 |
| 0.0002 | 4.0 | 868 | 0.4194 | 0.8372 |
| 0.0 | 5.0 | 1085 | 0.4806 | 0.8837 |
| 0.0 | 6.0 | 1302 | 0.4943 | 0.8837 |
| 0.0 | 7.0 | 1519 | 0.5374 | 0.8837 |
| 0.0 | 8.0 | 1736 | 0.5739 | 0.8837 |
| 0.0 | 9.0 | 1953 | 0.6244 | 0.8837 |
| 0.0 | 10.0 | 2170 | 0.6958 | 0.8837 |
| 0.0 | 11.0 | 2387 | 0.7044 | 0.8837 |
| 0.0 | 12.0 | 2604 | 0.7420 | 0.8837 |
| 0.0 | 13.0 | 2821 | 0.7779 | 0.8837 |
| 0.0 | 14.0 | 3038 | 0.8260 | 0.8837 |
| 0.0 | 15.0 | 3255 | 0.8100 | 0.8837 |
| 0.0 | 16.0 | 3472 | 0.8334 | 0.8837 |
| 0.0 | 17.0 | 3689 | 0.8315 | 0.8837 |
| 0.0 | 18.0 | 3906 | 0.8407 | 0.8837 |
| 0.0 | 19.0 | 4123 | 0.8449 | 0.8837 |
| 0.0 | 20.0 | 4340 | 0.8517 | 0.9070 |
| 0.0 | 21.0 | 4557 | 0.8539 | 0.9070 |
| 0.0 | 22.0 | 4774 | 0.8566 | 0.9070 |
| 0.0 | 23.0 | 4991 | 0.8670 | 0.9070 |
| 0.0 | 24.0 | 5208 | 0.8582 | 0.9070 |
| 0.0 | 25.0 | 5425 | 0.8799 | 0.9070 |
| 0.0 | 26.0 | 5642 | 0.8723 | 0.9070 |
| 0.0 | 27.0 | 5859 | 0.8712 | 0.9070 |
| 0.0 | 28.0 | 6076 | 0.8721 | 0.9070 |
| 0.0 | 29.0 | 6293 | 0.8741 | 0.9070 |
| 0.0 | 30.0 | 6510 | 0.8806 | 0.9070 |
| 0.0 | 31.0 | 6727 | 0.8875 | 0.9070 |
| 0.0 | 32.0 | 6944 | 0.8790 | 0.9070 |
| 0.0 | 33.0 | 7161 | 0.8844 | 0.9070 |
| 0.0 | 34.0 | 7378 | 0.8840 | 0.9070 |
| 0.0 | 35.0 | 7595 | 0.8876 | 0.9070 |
| 0.0 | 36.0 | 7812 | 0.8874 | 0.9070 |
| 0.0 | 37.0 | 8029 | 0.8892 | 0.9070 |
| 0.0 | 38.0 | 8246 | 0.8786 | 0.9070 |
| 0.0 | 39.0 | 8463 | 0.8835 | 0.9070 |
| 0.0 | 40.0 | 8680 | 0.8858 | 0.9070 |
| 0.0 | 41.0 | 8897 | 0.8804 | 0.9070 |
| 0.0 | 42.0 | 9114 | 0.8847 | 0.9070 |
| 0.0 | 43.0 | 9331 | 0.8839 | 0.9070 |
| 0.0 | 44.0 | 9548 | 0.8847 | 0.9070 |
| 0.0 | 45.0 | 9765 | 0.8817 | 0.9070 |
| 0.0 | 46.0 | 9982 | 0.8847 | 0.9070 |
| 0.0 | 47.0 | 10199 | 0.8836 | 0.9070 |
| 0.0 | 48.0 | 10416 | 0.8831 | 0.9070 |
| 0.0 | 49.0 | 10633 | 0.8834 | 0.9070 |
| 0.0 | 50.0 | 10850 | 0.8836 | 0.9070 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Clawoo/ppo-LunarLander-v2u1 | Clawoo | "2023-02-15T18:33:04Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-15T18:32:38Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.09 +/- 20.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is an assumption.
checkpoint = load_from_hub("Clawoo/ppo-LunarLander-v2u1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
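A short rollout sketch, assuming `gymnasium` (with Box2D) is installed and still registers the classic `LunarLander-v2` id (newer releases rename it `LunarLander-v3`):
```python
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```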
|
Ariffiq99/e_care_xlm_roberta_base_finetuned | Ariffiq99 | "2024-05-31T05:48:04Z" | 105 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-05-31T05:47:02Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: e_care_xlm_roberta_base_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e_care_xlm_roberta_base_finetuned
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6184
- F1: 0.7208
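A minimal inference sketch; the premise/choice pairing below mirrors e-CARE's two-choice format and is an assumption about how the model was trained:
```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "Ariffiq99/e_care_xlm_roberta_base_finetuned"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

premise = "The man dropped the glass."  # hypothetical example input
choices = ["The glass shattered.", "The glass sang."]
enc = tok([premise] * len(choices), choices, return_tensors="pt", padding=True)
# The model expects (batch, num_choices, seq_len), hence the unsqueeze.
logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print(choices[int(logits.argmax())])
```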
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6735 | 1.0 | 933 | 0.5926 | 0.6639 |
| 0.6001 | 2.0 | 1866 | 0.5624 | 0.6952 |
| 0.5482 | 3.0 | 2799 | 0.5460 | 0.7025 |
| 0.4975 | 4.0 | 3732 | 0.5534 | 0.7122 |
| 0.4487 | 5.0 | 4665 | 0.5646 | 0.7205 |
| 0.4091 | 6.0 | 5598 | 0.5910 | 0.7185 |
| 0.3674 | 7.0 | 6531 | 0.6184 | 0.7208 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
thiagodepaulo/mT5-pt-tca | thiagodepaulo | "2024-06-16T19:17:07Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"text-generation-inference",
"pt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-16T18:35:24Z" | ---
license: mit
language:
- pt
tags:
- text-generation-inference
widget:
- text: "E disse-lhe Pedro: Enéias, Jesus Cristo te dá saúde"
--- |
mradermacher/gpt2-health-qa-GGUF | mradermacher | "2025-03-01T02:01:49Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SuYee189/gpt2-health-qa",
"base_model:quantized:SuYee189/gpt2-health-qa",
"endpoints_compatible",
"region:us"
] | null | "2025-03-01T01:25:09Z" | ---
base_model: SuYee189/gpt2-health-qa
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SuYee189/gpt2-health-qa
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt2-health-qa-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
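As a concrete example, a quantized file from the table below can be run with a local llama.cpp build (a sketch; the prompt format is an assumption):
```bash
llama-cli -m gpt2-health-qa.Q4_K_M.gguf -p "Question: What are common causes of fatigue? Answer:" -n 64
```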
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-health-qa-GGUF/resolve/main/gpt2-health-qa.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
itlwas/SmolLM-135M-Q4_K_M-GGUF | itlwas | "2024-12-29T12:59:21Z" | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"base_model:HuggingFaceTB/SmolLM-135M",
"base_model:quantized:HuggingFaceTB/SmolLM-135M",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-29T12:59:19Z" | ---
library_name: transformers
license: apache-2.0
language:
- en
datasets:
- HuggingFaceTB/smollm-corpus
base_model: HuggingFaceTB/SmolLM-135M
tags:
- llama-cpp
- gguf-my-repo
---
# AIronMind/SmolLM-135M-Q4_K_M-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolLM-135M`](https://huggingface.co/HuggingFaceTB/SmolLM-135M) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM-135M) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo AIronMind/SmolLM-135M-Q4_K_M-GGUF --hf-file smollm-135m-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo AIronMind/SmolLM-135M-Q4_K_M-GGUF --hf-file smollm-135m-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AIronMind/SmolLM-135M-Q4_K_M-GGUF --hf-file smollm-135m-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AIronMind/SmolLM-135M-Q4_K_M-GGUF --hf-file smollm-135m-q4_k_m.gguf -c 2048
```
|
mlx-community/Qwen2.5-Coder-32B-Instruct-8bit | mlx-community | "2024-11-11T20:00:37Z" | 181 | 8 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2024-11-11T18:28:37Z" | ---
base_model: Qwen/Qwen2.5-Coder-32B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- mlx
---
# mlx-community/Qwen2.5-Coder-32B-Instruct-8bit
The Model [mlx-community/Qwen2.5-Coder-32B-Instruct-8bit](https://huggingface.co/mlx-community/Qwen2.5-Coder-32B-Instruct-8bit) was converted to MLX format from [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) using mlx-lm version **0.19.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen2.5-Coder-32B-Instruct-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
oyemade/w2v-bert-2.0-igbo-CV17.0 | oyemade | "2024-07-11T00:15:15Z" | 20 | 2 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"ig",
"dataset:common_voice_17_0",
"base_model:oyemade/w2v-bert-2.0-igbo-CV17.0",
"base_model:finetune:oyemade/w2v-bert-2.0-igbo-CV17.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-06T18:47:40Z" | ---
base_model: oyemade/w2v-bert-2.0-igbo-CV17.0
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: w2v-bert-2.0-igbo-CV17.0
results: []
language:
- ig
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-igbo-CV17.0
This model is a fine-tuned version of [oyemade/w2v-bert-2.0-igbo-CV17.0](https://huggingface.co/oyemade/w2v-bert-2.0-igbo-CV17.0) on the common_voice_17_0 dataset.
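A minimal transcription sketch, assuming the checkpoint loads through the standard `transformers` ASR pipeline:
```python
from transformers import pipeline

# Assumes torch and audio decoding support (e.g. soundfile/ffmpeg) are installed,
# and that `sample_igbo.wav` is a local recording (hypothetical filename).
asr = pipeline("automatic-speech-recognition", model="oyemade/w2v-bert-2.0-igbo-CV17.0")
print(asr("sample_igbo.wav")["text"])
```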
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1 |
andre-fichel/clearpolicy-llama-3-8binstruct-v3 | andre-fichel | "2024-05-16T18:22:08Z" | 4 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | "2024-05-16T18:16:28Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: clearpolicy-llama-3-8binstruct-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clearpolicy-llama-3-8binstruct-v3
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.11.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf | RichardErkhov | "2025-02-12T03:39:28Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-12T03:29:32Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gita-text-generation-gpt2 - GGUF
- Model creator: https://huggingface.co/leelamca/
- Original model: https://huggingface.co/leelamca/gita-text-generation-gpt2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gita-text-generation-gpt2.Q2_K.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q2_K.gguf) | Q2_K | 0.08GB |
| [gita-text-generation-gpt2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [gita-text-generation-gpt2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [gita-text-generation-gpt2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [gita-text-generation-gpt2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [gita-text-generation-gpt2.Q3_K.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q3_K.gguf) | Q3_K | 0.09GB |
| [gita-text-generation-gpt2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [gita-text-generation-gpt2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [gita-text-generation-gpt2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [gita-text-generation-gpt2.Q4_0.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q4_0.gguf) | Q4_0 | 0.1GB |
| [gita-text-generation-gpt2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [gita-text-generation-gpt2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [gita-text-generation-gpt2.Q4_K.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q4_K.gguf) | Q4_K | 0.11GB |
| [gita-text-generation-gpt2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [gita-text-generation-gpt2.Q4_1.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q4_1.gguf) | Q4_1 | 0.11GB |
| [gita-text-generation-gpt2.Q5_0.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q5_0.gguf) | Q5_0 | 0.11GB |
| [gita-text-generation-gpt2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [gita-text-generation-gpt2.Q5_K.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q5_K.gguf) | Q5_K | 0.12GB |
| [gita-text-generation-gpt2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [gita-text-generation-gpt2.Q5_1.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q5_1.gguf) | Q5_1 | 0.12GB |
| [gita-text-generation-gpt2.Q6_K.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q6_K.gguf) | Q6_K | 0.13GB |
| [gita-text-generation-gpt2.Q8_0.gguf](https://huggingface.co/RichardErkhov/leelamca_-_gita-text-generation-gpt2-gguf/blob/main/gita-text-generation-gpt2.Q8_0.gguf) | Q8_0 | 0.17GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
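A minimal sketch, assuming the checkpoint follows the standard GPT-2 text-generation layout:

```python
from transformers import pipeline

# Assumes the repository hosts a standard GPT-2 causal-LM checkpoint.
generator = pipeline("text-generation", model="leelamca/gita-text-generation-gpt2")
print(generator("You have a right to perform your duty", max_new_tokens=40)[0]["generated_text"])
```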
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
denbeo/621684de-2bb9-4b3a-8cd6-f19bc49a39e4 | denbeo | "2025-01-29T14:22:07Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T13:47:13Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 621684de-2bb9-4b3a-8cd6-f19bc49a39e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4e5d28c84ed48740_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4e5d28c84ed48740_train_data.json
type:
field_input: category
field_instruction: combo
field_output: comment_sentence
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/621684de-2bb9-4b3a-8cd6-f19bc49a39e4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4e5d28c84ed48740_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3c1095c2-b546-43be-a7c3-df9082be4f57
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3c1095c2-b546-43be-a7c3-df9082be4f57
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 621684de-2bb9-4b3a-8cd6-f19bc49a39e4
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0401 | 200 | 0.0052 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/TRP-BASE-SCE-V1-70B-GGUF | mradermacher | "2025-03-08T07:28:48Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksLab/TRP-BASE-SCE-V1-70B",
"base_model:quantized:TareksLab/TRP-BASE-SCE-V1-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-08T06:16:20Z" | ---
base_model: TareksLab/TRP-BASE-SCE-V1-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TareksLab/TRP-BASE-SCE-V1-70B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
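For the split quants in the table below (Q6_K and Q8_0), the parts can be joined with `cat` before use — a sketch, assuming they are plain byte-splits as described in TheBloke's READMEs:
```bash
cat TRP-BASE-SCE-V1-70B.Q6_K.gguf.part1of2 \
    TRP-BASE-SCE-V1-70B.Q6_K.gguf.part2of2 > TRP-BASE-SCE-V1-70B.Q6_K.gguf
```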
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/TRP-BASE-SCE-V1-70B-GGUF/resolve/main/TRP-BASE-SCE-V1-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlx-community/mixtral-8x22b-instruct-oh-4bit | mlx-community | "2024-04-17T23:06:04Z" | 5 | 0 | mlx | [
"mlx",
"safetensors",
"mixtral",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:mistral-community/Mixtral-8x22B-v0.1",
"base_model:finetune:mistral-community/Mixtral-8x22B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-12T05:26:13Z" | ---
language:
- en
license: apache-2.0
tags:
- mlx
datasets:
- teknium/OpenHermes-2.5
base_model: mistral-community/Mixtral-8x22B-v0.1
---
# mlx-community/mixtral-8x22b-instruct-oh-4bit
This model was converted to MLX format from [`fireworks-ai/mixtral-8x22b-instruct-oh`](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) using mlx-lm version **0.9.0**.
Model added by [Prince Canuma](https://twitter.com/Prince_Canuma).
Refer to the [original model card](https://huggingface.co/fireworks-ai/mixtral-8x22b-instruct-oh) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/mixtral-8x22b-instruct-oh-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
vrajur/dqn-SpaceInvadersNoFrameskip-v4 | vrajur | "2023-08-03T14:28:38Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-03T14:28:00Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 643.00 +/- 302.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vrajur -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vrajur -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vrajur
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
apwic/nerui-lora-r8-0 | apwic | "2024-06-04T01:26:47Z" | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | "2024-05-28T12:12:41Z" | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: nerui-lora-r8-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nerui-lora-r8-0
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0463
- Location Precision: 0.8462
- Location Recall: 0.9362
- Location F1: 0.8889
- Location Number: 94
- Organization Precision: 0.8667
- Organization Recall: 0.8563
- Organization F1: 0.8614
- Organization Number: 167
- Person Precision: 1.0
- Person Recall: 0.9854
- Person F1: 0.9926
- Person Number: 137
- Overall Precision: 0.9059
- Overall Recall: 0.9196
- Overall F1: 0.9127
- Overall Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Location Precision | Location Recall | Location F1 | Location Number | Organization Precision | Organization Recall | Organization F1 | Organization Number | Person Precision | Person Recall | Person F1 | Person Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:----------------:|:-------------:|:---------:|:-------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.1434 | 1.0 | 96 | 0.7069 | 0.0 | 0.0 | 0.0 | 94 | 0.0 | 0.0 | 0.0 | 167 | 0.0 | 0.0 | 0.0 | 137 | 0.0 | 0.0 | 0.0 | 0.8343 |
| 0.6699 | 2.0 | 192 | 0.5760 | 0.0 | 0.0 | 0.0 | 94 | 1.0 | 0.0060 | 0.0119 | 167 | 0.0 | 0.0 | 0.0 | 137 | 0.25 | 0.0025 | 0.0050 | 0.8348 |
| 0.5654 | 3.0 | 288 | 0.4641 | 0.0 | 0.0 | 0.0 | 94 | 0.4118 | 0.0419 | 0.0761 | 167 | 0.2414 | 0.0511 | 0.0843 | 137 | 0.3043 | 0.0352 | 0.0631 | 0.8420 |
| 0.4481 | 4.0 | 384 | 0.3466 | 0.2353 | 0.0426 | 0.0721 | 94 | 0.3578 | 0.2335 | 0.2826 | 167 | 0.3774 | 0.4380 | 0.4054 | 137 | 0.3614 | 0.2588 | 0.3016 | 0.8793 |
| 0.3376 | 5.0 | 480 | 0.2613 | 0.4058 | 0.2979 | 0.3436 | 94 | 0.5105 | 0.5808 | 0.5434 | 167 | 0.5081 | 0.6861 | 0.5839 | 137 | 0.4932 | 0.5503 | 0.5202 | 0.9202 |
| 0.2611 | 6.0 | 576 | 0.2025 | 0.5909 | 0.5532 | 0.5714 | 94 | 0.5588 | 0.6826 | 0.6146 | 167 | 0.6905 | 0.8467 | 0.7607 | 137 | 0.6130 | 0.7085 | 0.6573 | 0.9406 |
| 0.2071 | 7.0 | 672 | 0.1615 | 0.7021 | 0.7021 | 0.7021 | 94 | 0.6649 | 0.7605 | 0.7095 | 167 | 0.8224 | 0.9124 | 0.8651 | 137 | 0.7277 | 0.7990 | 0.7617 | 0.9555 |
| 0.1767 | 8.0 | 768 | 0.1337 | 0.7872 | 0.7872 | 0.7872 | 94 | 0.7120 | 0.7844 | 0.7464 | 167 | 0.9306 | 0.9781 | 0.9537 | 137 | 0.8033 | 0.8518 | 0.8268 | 0.9644 |
| 0.1601 | 9.0 | 864 | 0.1165 | 0.7980 | 0.8404 | 0.8187 | 94 | 0.7351 | 0.8144 | 0.7727 | 167 | 0.9306 | 0.9781 | 0.9537 | 137 | 0.8154 | 0.8769 | 0.8450 | 0.9671 |
| 0.1406 | 10.0 | 960 | 0.1041 | 0.7573 | 0.8298 | 0.7919 | 94 | 0.7816 | 0.8144 | 0.7977 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8286 | 0.8744 | 0.8509 | 0.9693 |
| 0.1283 | 11.0 | 1056 | 0.0951 | 0.8021 | 0.8191 | 0.8105 | 94 | 0.7865 | 0.8383 | 0.8116 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8417 | 0.8819 | 0.8613 | 0.9704 |
| 0.1229 | 12.0 | 1152 | 0.0895 | 0.8019 | 0.9043 | 0.8500 | 94 | 0.8 | 0.8383 | 0.8187 | 167 | 0.9375 | 0.9854 | 0.9609 | 137 | 0.8471 | 0.9045 | 0.8748 | 0.9715 |
| 0.1116 | 13.0 | 1248 | 0.0831 | 0.83 | 0.8830 | 0.8557 | 94 | 0.8314 | 0.8563 | 0.8437 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8675 | 0.9045 | 0.8856 | 0.9743 |
| 0.1077 | 14.0 | 1344 | 0.0769 | 0.8571 | 0.8936 | 0.875 | 94 | 0.8409 | 0.8862 | 0.8630 | 167 | 0.9504 | 0.9781 | 0.9640 | 137 | 0.8819 | 0.9196 | 0.9004 | 0.9760 |
| 0.1045 | 15.0 | 1440 | 0.0758 | 0.8333 | 0.9043 | 0.8673 | 94 | 0.8430 | 0.8683 | 0.8555 | 167 | 0.9371 | 0.9781 | 0.9571 | 137 | 0.8729 | 0.9146 | 0.8933 | 0.9760 |
| 0.1 | 16.0 | 1536 | 0.0753 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8111 | 0.8743 | 0.8415 | 167 | 0.9437 | 0.9781 | 0.9606 | 137 | 0.8615 | 0.9221 | 0.8908 | 0.9746 |
| 0.0961 | 17.0 | 1632 | 0.0690 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8563 | 0.8922 | 0.8739 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8910 | 0.9246 | 0.9075 | 0.9785 |
| 0.0981 | 18.0 | 1728 | 0.0676 | 0.86 | 0.9149 | 0.8866 | 94 | 0.8523 | 0.8982 | 0.8746 | 167 | 0.9504 | 0.9781 | 0.9640 | 137 | 0.8873 | 0.9296 | 0.9080 | 0.9782 |
| 0.0916 | 19.0 | 1824 | 0.0653 | 0.8333 | 0.9043 | 0.8673 | 94 | 0.8647 | 0.8802 | 0.8724 | 167 | 0.9640 | 0.9781 | 0.9710 | 137 | 0.8905 | 0.9196 | 0.9048 | 0.9790 |
| 0.0899 | 20.0 | 1920 | 0.0637 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8563 | 0.8922 | 0.8739 | 167 | 0.9640 | 0.9781 | 0.9710 | 137 | 0.8932 | 0.9246 | 0.9086 | 0.9790 |
| 0.0856 | 21.0 | 2016 | 0.0656 | 0.8113 | 0.9149 | 0.8600 | 94 | 0.8580 | 0.8683 | 0.8631 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8795 | 0.9171 | 0.8979 | 0.9773 |
| 0.0844 | 22.0 | 2112 | 0.0621 | 0.8416 | 0.9043 | 0.8718 | 94 | 0.8563 | 0.8922 | 0.8739 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8867 | 0.9246 | 0.9053 | 0.9782 |
| 0.0816 | 23.0 | 2208 | 0.0608 | 0.85 | 0.9043 | 0.8763 | 94 | 0.8647 | 0.8802 | 0.8724 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8927 | 0.9196 | 0.9059 | 0.9798 |
| 0.0803 | 24.0 | 2304 | 0.0591 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8671 | 0.8982 | 0.8824 | 167 | 0.9571 | 0.9781 | 0.9675 | 137 | 0.8956 | 0.9271 | 0.9111 | 0.9796 |
| 0.0793 | 25.0 | 2400 | 0.0577 | 0.85 | 0.9043 | 0.8763 | 94 | 0.8824 | 0.8982 | 0.8902 | 167 | 0.9710 | 0.9781 | 0.9745 | 137 | 0.9044 | 0.9271 | 0.9156 | 0.9818 |
| 0.0744 | 26.0 | 2496 | 0.0576 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8706 | 0.8862 | 0.8783 | 167 | 0.9710 | 0.9781 | 0.9745 | 137 | 0.9 | 0.9271 | 0.9134 | 0.9818 |
| 0.0761 | 27.0 | 2592 | 0.0571 | 0.8416 | 0.9043 | 0.8718 | 94 | 0.8757 | 0.8862 | 0.8810 | 167 | 0.9640 | 0.9781 | 0.9710 | 137 | 0.8973 | 0.9221 | 0.9095 | 0.9807 |
| 0.0724 | 28.0 | 2688 | 0.0559 | 0.8586 | 0.9043 | 0.8808 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 0.9710 | 0.9781 | 0.9745 | 137 | 0.8995 | 0.9221 | 0.9107 | 0.9809 |
| 0.071 | 29.0 | 2784 | 0.0542 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 0.9783 | 0.9854 | 0.9818 | 137 | 0.9044 | 0.9271 | 0.9156 | 0.9818 |
| 0.0705 | 30.0 | 2880 | 0.0549 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8690 | 0.8743 | 0.8716 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9022 | 0.9271 | 0.9145 | 0.9818 |
| 0.0702 | 31.0 | 2976 | 0.0517 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8817 | 0.8922 | 0.8869 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9181 | 0.9296 | 0.9238 | 0.9834 |
| 0.065 | 32.0 | 3072 | 0.0532 | 0.8396 | 0.9468 | 0.89 | 94 | 0.8951 | 0.8683 | 0.8815 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9134 | 0.9271 | 0.9202 | 0.9826 |
| 0.0639 | 33.0 | 3168 | 0.0533 | 0.8286 | 0.9255 | 0.8744 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9037 | 0.9196 | 0.9116 | 0.9815 |
| 0.0642 | 34.0 | 3264 | 0.0520 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.875 | 0.8802 | 0.8776 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9089 | 0.9271 | 0.9179 | 0.9820 |
| 0.0652 | 35.0 | 3360 | 0.0518 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8690 | 0.8743 | 0.8716 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9815 |
| 0.0627 | 36.0 | 3456 | 0.0533 | 0.87 | 0.9255 | 0.8969 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9069 | 0.9296 | 0.9181 | 0.9818 |
| 0.0606 | 37.0 | 3552 | 0.0503 | 0.8878 | 0.9255 | 0.9062 | 94 | 0.8698 | 0.8802 | 0.8750 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9156 | 0.9271 | 0.9213 | 0.9826 |
| 0.0611 | 38.0 | 3648 | 0.0497 | 0.87 | 0.9255 | 0.8969 | 94 | 0.8848 | 0.8743 | 0.8795 | 167 | 0.9854 | 0.9854 | 0.9854 | 137 | 0.9154 | 0.9246 | 0.92 | 0.9829 |
| 0.0645 | 39.0 | 3744 | 0.0511 | 0.8431 | 0.9149 | 0.8776 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 0.9926 | 0.9854 | 0.9890 | 137 | 0.9080 | 0.9171 | 0.9125 | 0.9823 |
| 0.061 | 40.0 | 3840 | 0.0487 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8765 | 0.8922 | 0.8843 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9158 | 0.9296 | 0.9227 | 0.9840 |
| 0.0591 | 41.0 | 3936 | 0.0491 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8802 | 0.8802 | 0.8802 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9132 | 0.9246 | 0.9189 | 0.9834 |
| 0.058 | 42.0 | 4032 | 0.0480 | 0.8687 | 0.9149 | 0.8912 | 94 | 0.8757 | 0.8862 | 0.8810 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9156 | 0.9271 | 0.9213 | 0.9840 |
| 0.0587 | 43.0 | 4128 | 0.0494 | 0.8350 | 0.9149 | 0.8731 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9055 | 0.9146 | 0.91 | 0.9820 |
| 0.0562 | 44.0 | 4224 | 0.0482 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8788 | 0.8683 | 0.8735 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9127 | 0.9196 | 0.9161 | 0.9829 |
| 0.0565 | 45.0 | 4320 | 0.0471 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8795 | 0.8743 | 0.8769 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9132 | 0.9246 | 0.9189 | 0.9837 |
| 0.0541 | 46.0 | 4416 | 0.0482 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8795 | 0.8743 | 0.8769 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9086 | 0.9246 | 0.9166 | 0.9831 |
| 0.0547 | 47.0 | 4512 | 0.0487 | 0.8350 | 0.9149 | 0.8731 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9055 | 0.9146 | 0.91 | 0.9823 |
| 0.0537 | 48.0 | 4608 | 0.0480 | 0.8269 | 0.9149 | 0.8687 | 94 | 0.8659 | 0.8503 | 0.8580 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9007 | 0.9121 | 0.9064 | 0.9829 |
| 0.0525 | 49.0 | 4704 | 0.0477 | 0.8416 | 0.9043 | 0.8718 | 94 | 0.8882 | 0.8563 | 0.8720 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9144 | 0.9121 | 0.9132 | 0.9826 |
| 0.0513 | 50.0 | 4800 | 0.0472 | 0.86 | 0.9149 | 0.8866 | 94 | 0.8596 | 0.8802 | 0.8698 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9064 | 0.9246 | 0.9154 | 0.9845 |
| 0.0507 | 51.0 | 4896 | 0.0481 | 0.8286 | 0.9255 | 0.8744 | 94 | 0.875 | 0.8383 | 0.8563 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.905 | 0.9095 | 0.9073 | 0.9820 |
| 0.0499 | 52.0 | 4992 | 0.0472 | 0.87 | 0.9255 | 0.8969 | 94 | 0.8757 | 0.8862 | 0.8810 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9158 | 0.9296 | 0.9227 | 0.9837 |
| 0.0519 | 53.0 | 5088 | 0.0471 | 0.8614 | 0.9255 | 0.8923 | 94 | 0.8743 | 0.8743 | 0.8743 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9132 | 0.9246 | 0.9189 | 0.9840 |
| 0.0523 | 54.0 | 5184 | 0.0483 | 0.8286 | 0.9255 | 0.8744 | 94 | 0.8545 | 0.8443 | 0.8494 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.8963 | 0.9121 | 0.9041 | 0.9826 |
| 0.0507 | 55.0 | 5280 | 0.0465 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8614 | 0.8563 | 0.8589 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9831 |
| 0.0506 | 56.0 | 5376 | 0.0465 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8614 | 0.8563 | 0.8589 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9831 |
| 0.0504 | 57.0 | 5472 | 0.0475 | 0.8208 | 0.9255 | 0.8700 | 94 | 0.8452 | 0.8503 | 0.8478 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.8900 | 0.9146 | 0.9021 | 0.9831 |
| 0.0484 | 58.0 | 5568 | 0.0462 | 0.8302 | 0.9362 | 0.88 | 94 | 0.8659 | 0.8503 | 0.8580 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9012 | 0.9171 | 0.9091 | 0.9837 |
| 0.0487 | 59.0 | 5664 | 0.0457 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8727 | 0.8623 | 0.8675 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9837 |
| 0.0463 | 60.0 | 5760 | 0.0475 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8623 | 0.8623 | 0.8623 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9015 | 0.9196 | 0.9104 | 0.9848 |
| 0.0462 | 61.0 | 5856 | 0.0469 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8655 | 0.8862 | 0.8757 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9069 | 0.9296 | 0.9181 | 0.9848 |
| 0.0497 | 62.0 | 5952 | 0.0469 | 0.8544 | 0.9362 | 0.8934 | 94 | 0.8521 | 0.8623 | 0.8571 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9017 | 0.9221 | 0.9118 | 0.9845 |
| 0.0465 | 63.0 | 6048 | 0.0469 | 0.8515 | 0.9149 | 0.8821 | 94 | 0.8683 | 0.8683 | 0.8683 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9848 |
| 0.0468 | 64.0 | 6144 | 0.0470 | 0.86 | 0.9149 | 0.8866 | 94 | 0.8841 | 0.8683 | 0.8761 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9173 | 0.9196 | 0.9184 | 0.9843 |
| 0.0455 | 65.0 | 6240 | 0.0467 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8675 | 0.8623 | 0.8649 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9845 |
| 0.0456 | 66.0 | 6336 | 0.0463 | 0.8431 | 0.9149 | 0.8776 | 94 | 0.8712 | 0.8503 | 0.8606 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9075 | 0.9121 | 0.9098 | 0.9834 |
| 0.0436 | 67.0 | 6432 | 0.0457 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9080 | 0.9171 | 0.9125 | 0.9837 |
| 0.0442 | 68.0 | 6528 | 0.0464 | 0.8365 | 0.9255 | 0.8788 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9057 | 0.9171 | 0.9114 | 0.9837 |
| 0.0463 | 69.0 | 6624 | 0.0463 | 0.8447 | 0.9255 | 0.8832 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9080 | 0.9171 | 0.9125 | 0.9840 |
| 0.0445 | 70.0 | 6720 | 0.0457 | 0.8529 | 0.9255 | 0.8878 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9102 | 0.9171 | 0.9136 | 0.9840 |
| 0.0456 | 71.0 | 6816 | 0.0474 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8788 | 0.8683 | 0.8735 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9109 | 0.9246 | 0.9177 | 0.9851 |
| 0.0473 | 72.0 | 6912 | 0.0479 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8659 | 0.8503 | 0.8580 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9035 | 0.9171 | 0.9102 | 0.9837 |
| 0.0434 | 73.0 | 7008 | 0.0475 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8712 | 0.8503 | 0.8606 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9057 | 0.9171 | 0.9114 | 0.9840 |
| 0.042 | 74.0 | 7104 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8765 | 0.8503 | 0.8632 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9102 | 0.9171 | 0.9136 | 0.9837 |
| 0.0438 | 75.0 | 7200 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8765 | 0.8503 | 0.8632 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9102 | 0.9171 | 0.9136 | 0.9837 |
| 0.0437 | 76.0 | 7296 | 0.0459 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8623 | 0.8623 | 0.8623 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9039 | 0.9221 | 0.9129 | 0.9843 |
| 0.0455 | 77.0 | 7392 | 0.0469 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8827 | 0.8563 | 0.8693 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9840 |
| 0.0426 | 78.0 | 7488 | 0.0467 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8727 | 0.8623 | 0.8675 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9848 |
| 0.043 | 79.0 | 7584 | 0.0457 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8735 | 0.8683 | 0.8709 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9064 | 0.9246 | 0.9154 | 0.9854 |
| 0.0435 | 80.0 | 7680 | 0.0462 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8727 | 0.8623 | 0.8675 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9851 |
| 0.0411 | 81.0 | 7776 | 0.0461 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8606 | 0.8503 | 0.8554 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9012 | 0.9171 | 0.9091 | 0.9843 |
| 0.0421 | 82.0 | 7872 | 0.0458 | 0.8544 | 0.9362 | 0.8934 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9843 |
| 0.0416 | 83.0 | 7968 | 0.0462 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9843 |
| 0.0412 | 84.0 | 8064 | 0.0461 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8788 | 0.8683 | 0.8735 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9109 | 0.9246 | 0.9177 | 0.9851 |
| 0.0428 | 85.0 | 8160 | 0.0465 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9845 |
| 0.0434 | 86.0 | 8256 | 0.0467 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9840 |
| 0.0411 | 87.0 | 8352 | 0.0466 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9840 |
| 0.0436 | 88.0 | 8448 | 0.0467 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9084 | 0.9221 | 0.9152 | 0.9848 |
| 0.0413 | 89.0 | 8544 | 0.0460 | 0.8544 | 0.9362 | 0.8934 | 94 | 0.8795 | 0.8743 | 0.8769 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9134 | 0.9271 | 0.9202 | 0.9854 |
| 0.0401 | 90.0 | 8640 | 0.0467 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8675 | 0.8623 | 0.8649 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9062 | 0.9221 | 0.9141 | 0.9848 |
| 0.0421 | 91.0 | 8736 | 0.0467 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9107 | 0.9221 | 0.9164 | 0.9845 |
| 0.0407 | 92.0 | 8832 | 0.0462 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9845 |
| 0.0449 | 93.0 | 8928 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8773 | 0.8563 | 0.8667 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9104 | 0.9196 | 0.9150 | 0.9845 |
| 0.0397 | 94.0 | 9024 | 0.0462 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9037 | 0.9196 | 0.9116 | 0.9845 |
| 0.0417 | 95.0 | 9120 | 0.0463 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9037 | 0.9196 | 0.9116 | 0.9845 |
| 0.0402 | 96.0 | 9216 | 0.0465 | 0.8381 | 0.9362 | 0.8844 | 94 | 0.8780 | 0.8623 | 0.8701 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9084 | 0.9221 | 0.9152 | 0.9848 |
| 0.0422 | 97.0 | 9312 | 0.0464 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9851 |
| 0.0417 | 98.0 | 9408 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8720 | 0.8563 | 0.8640 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9082 | 0.9196 | 0.9139 | 0.9851 |
| 0.0409 | 99.0 | 9504 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9848 |
| 0.0404 | 100.0 | 9600 | 0.0463 | 0.8462 | 0.9362 | 0.8889 | 94 | 0.8667 | 0.8563 | 0.8614 | 167 | 1.0 | 0.9854 | 0.9926 | 137 | 0.9059 | 0.9196 | 0.9127 | 0.9848 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
ramnathv/openhermes-mistral-dpo-gptq | ramnathv | "2023-12-21T19:01:16Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:adapter:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | "2023-12-21T18:53:36Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Rewards/chosen: 12.3662
- Rewards/rejected: 8.3268
- Rewards/accuracies: 0.875
- Rewards/margins: 4.0394
- Logps/rejected: -284.7693
- Logps/chosen: -270.0549
- Logits/rejected: -2.4409
- Logits/chosen: -2.6955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
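For reference, a DPO run with these hyperparameters might be reproduced with TRL roughly as follows. This is an illustrative sketch, not the original training script: the preference rows are made up, and the exact `DPOTrainer` arguments vary across TRL versions (this follows the 0.7.x API):
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Hypothetical preference data: DPO expects "prompt", "chosen", "rejected" columns.
train_dataset = Dataset.from_dict({
    "prompt": ["Explain DPO in one sentence."],
    "chosen": ["DPO optimizes a policy directly from preference pairs."],
    "rejected": ["DPO is a kind of tokenizer."],
})

model_name = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Loading a GPTQ checkpoint assumes auto-gptq/optimum are installed.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, TRL can derive the frozen reference model
    args=TrainingArguments(
        output_dir="openhermes-mistral-dpo-gptq",
        per_device_train_batch_size=1,
        learning_rate=2e-4,
        max_steps=50,
        fp16=True,
    ),
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```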
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6874 | 0.01 | 10 | 0.6396 | 0.0822 | -0.0086 | 0.5625 | 0.0908 | -368.1237 | -392.8953 | -2.2951 | -2.4730 |
| 0.7988 | 0.01 | 20 | 0.5694 | 1.1019 | 0.5787 | 0.875 | 0.5232 | -362.2504 | -382.6982 | -2.2767 | -2.5084 |
| 0.6368 | 0.01 | 30 | 0.5572 | 11.3452 | 7.2855 | 0.875 | 4.0597 | -295.1821 | -280.2652 | -2.4358 | -2.6872 |
| 1.6793 | 0.02 | 40 | 0.5216 | 11.9759 | 7.8540 | 0.9375 | 4.1220 | -289.4976 | -273.9581 | -2.4389 | -2.6947 |
| 4.9001 | 0.03 | 50 | 0.6173 | 12.3662 | 8.3268 | 0.875 | 4.0394 | -284.7693 | -270.0549 | -2.4409 | -2.6955 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF | mradermacher | "2025-03-12T01:33:35Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"model_stock",
"ZeroXClem-Llama-3.1-8B-AthenaSky-MegaMix",
"en",
"base_model:ZeroXClem/Llama-3.1-8B-AthenaSky-MegaMix",
"base_model:quantized:ZeroXClem/Llama-3.1-8B-AthenaSky-MegaMix",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-11T16:22:38Z" | ---
base_model: ZeroXClem/Llama-3.1-8B-AthenaSky-MegaMix
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- model_stock
- ZeroXClem-Llama-3.1-8B-AthenaSky-MegaMix
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ZeroXClem/Llama-3.1-8B-AthenaSky-MegaMix
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
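As one option, the quantized files below can also be run from Python via `llama-cpp-python` (a minimal sketch; the file name assumes you downloaded the Q4_K_M quant from the table):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes the GGUF file has been downloaded to the working directory.
llm = Llama(model_path="Llama-3.1-8B-AthenaSky-MegaMix.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```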
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-AthenaSky-MegaMix-GGUF/resolve/main/Llama-3.1-8B-AthenaSky-MegaMix.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zekunli/qwen2.5-7b-alpaca-selection-cot | Zekunli | "2024-11-12T22:17:22Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-12T21:48:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso/4e2c54f8-7607-497c-9029-0fd55daa3598 | lesso | "2025-02-08T23:48:16Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T01:40:28Z" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4e2c54f8-7607-497c-9029-0fd55daa3598
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 4e2c54f8-7607-497c-9029-0fd55daa3598
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: AdamW (bitsandbytes 8-bit, `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 310
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0097 | 1 | 6.9341 |
| 6.9004 | 0.4843 | 50 | 6.8755 |
| 6.866 | 0.9685 | 100 | 6.8579 |
| 6.9608 | 1.4528 | 150 | 6.8503 |
| 6.8475 | 1.9370 | 200 | 6.8437 |
| 6.9513 | 2.4213 | 250 | 6.8380 |
| 6.8392 | 2.9056 | 300 | 6.8322 |
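The resulting LoRA adapter can be loaded back onto the base model for a quick smoke test (a minimal sketch using the repo id from this card):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "echarlaix/tiny-random-PhiForCausalLM"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "lesso/4e2c54f8-7607-497c-9029-0fd55daa3598")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello", return_tensors="pt")
print(model.generate(**inputs, max_new_tokens=8))
```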
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf | ISTA-DASLab | "2024-03-11T20:54:17Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2401.06118",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"aqlm",
"region:us"
] | text-generation | "2024-01-30T17:23:36Z" | Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `meta-llama/Llama-2-7b-hf`.
For this quantization, we used 8 codebooks of 8 bits each (the 8x8 scheme).
Selected evaluation results for this and other models:
| Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
|------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
| Llama-2-7b | 1x16 | 5.92 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf) |
| Llama-2-7b | 2x8 | 6.69 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-2x8-hf) |
| Llama-2-7b (THIS) | 8x8 | 6.61 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf) |
| Llama-2-13b| 1x16 | 5.22 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-2Bit-1x16-hf)|
| Llama-2-70b| 1x16 | 3.83 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf)|
| Llama-2-70b| 2x8 | 4.21 | 18.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-2Bit-2x8-hf) |
| Mixtral-8x7b| 1x16 | 3.35 | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7b-AQLM-2Bit-1x16-hf)|
| Mixtral-8x7b-Instruct| 1x16 | - | 12.6 | [Link](https://huggingface.co/ISTA-DASLab/Mixtral-8x7B-Instruct-v0_1-AQLM-2Bit-1x16-hf)|
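A minimal inference sketch (assumes the `aqlm` package and a CUDA GPU are available; see the repo linked below for authoritative instructions):
```python
# pip install aqlm[gpu] transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-8x8-hf"
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto"
).cuda()
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```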
To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM). |
JaehyeokLee/preliminary_efficient_batching_random_gist_checkpoint_epoch_1_step_40 | JaehyeokLee | "2025-02-18T10:49:49Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-18T10:45:38Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
- Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
- Multi-Linguality: It can support more than 100 working languages.
- Multi-Granularity: It is able to process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
**Some suggestions for a retrieval pipeline in RAG:**
We recommend the following pipeline: hybrid retrieval + re-ranking (a minimal sketch follows the list below).
- Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
A classic example: using both embedding retrieval and the BM25 algorithm.
Now, you can try to use BGE-M3, which supports both embedding and sparse retrieval.
This allows you to obtain token weights (similar to BM25) without any additional cost when generating dense embeddings.
- As a cross-encoder model, a re-ranker demonstrates higher accuracy than a bi-encoder embedding model.
Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
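As a concrete illustration, a minimal hybrid-plus-rerank sketch with this library might look as follows (the 0.7/0.3 weights are an arbitrary choice, not a recommendation):
```python
from FlagEmbedding import BGEM3FlagModel, FlagReranker

retriever = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

query = "What is BGE M3?"
docs = ["BGE M3 is an embedding model supporting dense, sparse and multi-vector retrieval.",
        "BM25 is a bag-of-words retrieval function."]

q = retriever.encode([query], return_dense=True, return_sparse=True)
d = retriever.encode(docs, return_dense=True, return_sparse=True)

# Hybrid score: weighted sum of dense similarity and lexical-matching score.
hybrid = []
for i in range(len(docs)):
    dense = float(q["dense_vecs"][0] @ d["dense_vecs"][i])
    sparse = retriever.compute_lexical_matching_score(q["lexical_weights"][0],
                                                      d["lexical_weights"][i])
    hybrid.append(0.7 * dense + 0.3 * sparse)

# Re-rank the top candidates with the cross-encoder.
top = sorted(range(len(docs)), key=lambda i: hybrid[i], reverse=True)[:2]
scores = reranker.compute_score([[query, docs[i]] for i in top])
print(sorted(zip(scores, top), reverse=True))
```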
## News:
- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
## Specs
- Model
| Model Name | Dimension | Sequence Length | Introduction |
|:----:|:---:|:---:|:---:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised|
| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extend the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and further pretrained via [retromae](https://github.com/staoxiao/RetroMAE)|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
- Data
| Dataset | Introduction |
|:----:|:---:|
| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document Retrieval Dataset, covering 13 languages |
## FAQ
**1. Introduction for different retrieval methods**
- Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
- Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
- Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
**2. Comparison with BGE-v1.5 and other monolingual models**
BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
unlike most existing models that can only perform dense retrieval.
In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc),
and users can choose a model that suits their specific needs based on practical considerations,
such as whether to require multilingual or cross-language support, and whether to process long texts.
**3. How to use BGE-M3 in other projects?**
For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
For sparse retrieval methods, most open-source libraries currently do not support direct utilization of the BGE-M3 model.
Contributions from the community are welcome.
In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
**4. How to fine-tune bge-M3 model?**
You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
to fine-tune the dense embedding.
Our code and data for unified fine-tuning (dense, sparse, and multi-vectors) will be released.
## Usage
Install:
```
git clone https://github.com/FlagOpen/FlagEmbedding.git
cd FlagEmbedding
pip install -e .
```
or:
```
pip install -U FlagEmbedding
```
### Generate Embedding for text
- Dense Embedding
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3',
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1,
batch_size=12,
max_length=8192, # If you don't need such a long length, you can set a smaller value to speed up the encoding process.
)['dense_vecs']
embeddings_2 = model.encode(sentences_2)['dense_vecs']
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# [[0.6265, 0.3477], [0.3499, 0.678 ]]
```
You can also use sentence-transformers and huggingface transformers to generate dense embeddings.
Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
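For instance, a minimal sentence-transformers sketch (the 1024-d output follows from the spec table above; check the linked docs for normalization details):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")
emb = model.encode(["What is BGE M3?"], normalize_embeddings=True)
print(emb.shape)  # expected (1, 1024) given the 1024-d dense head
```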
- Sparse Embedding (Lexical Weight)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
# you can see the weight for each token:
print(model.convert_id_to_token(output_1['lexical_weights']))
# [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
# {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
# compute the scores via lexical matching
lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
print(lexical_scores)
# 0.19554901123046875
print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
# 0.0
```
- Multi-Vector (ColBERT)
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
# 0.7797
# 0.4620
```
### Compute score for text pairs
Given a list of text pairs, you can get the scores computed by different methods.
```python
from FlagEmbedding import BGEM3FlagModel
model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
sentence_pairs = [[i,j] for i in sentences_1 for j in sentences_2]
print(model.compute_score(sentence_pairs,
max_passage_length=128, # a smaller max length leads to a lower latency
weights_for_different_modes=[0.4, 0.2, 0.4])) # weights_for_different_modes(w) is used to do weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
# {
# 'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
# 'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
# 'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
# 'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
# 'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
# }
```
## Evaluation
- Multilingual (Miracl dataset)

- Cross-lingual (MKQA dataset)

- Long Document Retrieval
- MLDR:

Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
covering 13 languages, including test set, validation set, and training set.
We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
Therefore, comparing baselines with `Dense w.o.long` (fine-tuned without the long-document dataset) is more equitable.
Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets.
We believe that this data will be helpful for the open-source community in training document retrieval models.
- NarrativeQA:

## Training
- Self-knowledge Distillation: combining multiple outputs from different
retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval); a rough sketch follows this list.
- Efficient Batching: improves efficiency when fine-tuning on long text.
The small-batch strategy is simple but effective, and can also be used to fine-tune large embedding models.
- MCLS: a simple method to improve performance on long text without fine-tuning.
If you do not have enough resources to fine-tune the model on long text, this method is useful.
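A rough sketch of the self-knowledge distillation idea described above (the score helper, equal weighting, and KL objective are all assumptions; the authors' actual loss is given in the report):
```python
import torch.nn.functional as F

# Hypothetical per-query score tensors over candidate passages, one per mode.
dense_s, sparse_s, colbert_s = scores_from_model()  # assumed helper, shape (B, K)

# Teacher signal: combine the three modes (equal weights here, an assumption).
teacher = (dense_s + sparse_s + colbert_s) / 3.0

loss = 0.0
for student in (dense_s, sparse_s, colbert_s):
    # Each single mode is distilled toward the combined distribution.
    loss = loss + F.kl_div(F.log_softmax(student, dim=-1),
                           F.softmax(teacher.detach(), dim=-1),
                           reduction="batchmean")
```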
Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
**The fine-tuning codes and datasets will be open-sourced in the near future.**
## Acknowledgement
Thanks to the authors of the open-sourced datasets, including Miracl, MKQA, NarrativeQA, etc.
Thanks to the open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron) and [Pyserini](https://github.com/castorini/pyserini).
## Citation
If you find this repository useful, please consider giving a star :star: and citation
```
@misc{bge-m3,
title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
year={2024},
eprint={2402.03216},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
24aittl/control_inpaint | 24aittl | "2025-03-30T19:18:03Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | "2025-03-30T18:14:55Z" | |
nikoryagin/sae_Qwen_Qwen2.5-7B_resid_post_layer_25_size_16384_batchtopk_u9220oo9_lora_lgxmbr18 | nikoryagin | "2025-04-06T04:40:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"sae",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | "2025-04-06T04:39:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
John6666/cat-tower-noobai-xl-checkpoint-v10-sdxl | John6666 | "2024-12-23T06:48:33Z" | 138 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"2D",
"cute",
"girls",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.0",
"base_model:finetune:Laxhar/noobai-XL-1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-06T03:50:04Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- 2D
- cute
- girls
- illustrious
base_model:
- Laxhar/noobai-XL-1.0
- calculater/copycat-noob
- Raelina/Raehoshi-illust-XL
---
The original model is [here](https://civitai.com/models/920709/cat-tower-noobai-xl-checkpoint?modelVersionId=1030561).
This model was created by [nuko_masshigura](https://civitai.com/user/nuko_masshigura).
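A minimal loading sketch with 🤗 Diffusers (the prompt is illustrative; check the Civitai page for recommended settings):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/cat-tower-noobai-xl-checkpoint-v10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("1girl, cat ears, cozy tower, watercolor",
             num_inference_steps=28).images[0]
image.save("sample.png")
```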
|
Minata/plbart-base-finetuned-src_fm_fc-to-target | Minata | "2023-02-19T21:18:41Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"plbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-02-19T21:01:04Z" | ---
tags:
- generated_from_trainer
model-index:
- name: plbart-base-finetuned-src_fm_fc-to-target
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-base-finetuned-src_fm_fc-to-target
This model is a fine-tuned version of [uclanlp/plbart-base](https://huggingface.co/uclanlp/plbart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1974
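A minimal inference sketch (the exact `src_fm_fc` input format is not documented here, so the example input is illustrative):
```python
from transformers import PLBartForConditionalGeneration, PLBartTokenizer

repo = "Minata/plbart-base-finetuned-src_fm_fc-to-target"
tokenizer = PLBartTokenizer.from_pretrained(repo)
model = PLBartForConditionalGeneration.from_pretrained(repo)

src = "public int add(int a, int b) { return a + b; }"  # illustrative focal method
ids = tokenizer(src, return_tensors="pt")
out = model.generate(**ids, max_length=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```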
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8454 | 1.0 | 113 | 0.2344 |
| 0.2422 | 2.0 | 226 | 0.2051 |
| 0.2101 | 3.0 | 339 | 0.1974 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Treza12/Biomistral-Class4 | Treza12 | "2024-05-08T01:50:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-07T12:51:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF | mradermacher | "2024-12-26T23:22:11Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/YamshadowInex12_Experiment28Experiment26",
"base_model:quantized:MaziyarPanahi/YamshadowInex12_Experiment28Experiment26",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-26T22:55:03Z" | ---
base_model: MaziyarPanahi/YamshadowInex12_Experiment28Experiment26
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: YamshadowInex12_Experiment28Experiment26
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/YamshadowInex12_Experiment28Experiment26
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowInex12_Experiment28Experiment26-GGUF/resolve/main/YamshadowInex12_Experiment28Experiment26.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ammardaffa/image_classification | ammardaffa | "2023-09-16T08:49:54Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-09-16T04:36:18Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3273
- Accuracy: 0.5375
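For quick inference, the fine-tuned checkpoint can be used with the `image-classification` pipeline (a minimal sketch; the label set comes from the private imagefolder dataset and is not documented here):
```python
from transformers import pipeline

clf = pipeline("image-classification", model="ammardaffa/image_classification")
print(clf("example.jpg"))  # path to a local image; labels follow the fine-tuning set
```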
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.7704 | 0.3625 |
| No log | 2.0 | 80 | 1.4682 | 0.4938 |
| No log | 3.0 | 120 | 1.3937 | 0.4625 |
| No log | 4.0 | 160 | 1.3677 | 0.5125 |
| No log | 5.0 | 200 | 1.3114 | 0.525 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
video-portal-zacarias-mc-ph-e-emily-louise/Viral.portal.zacarias.mc.ph.e.emily.louise.mc.ph.full.video.link | video-portal-zacarias-mc-ph-e-emily-louise | "2025-04-12T18:11:50Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-12T18:10:37Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n6bjbnr?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
MC PH VIDEO} PORTAL ZACARIAS MC PH E EMILY LOUISE E MC PH MC PH E FERNANDA CAMPOS PORTAL ZACARIAS
PORTAL ZACARIAS MC PH E EMILY LOUISE E MC PH MC PH E FERNANDA CAMPOS PORTAL ZACARIAS VIDEO |
hungphongtrn/vi_en_mbart-large-50-many-to-many-mmt_doc_train | hungphongtrn | "2024-04-09T01:27:40Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-07T03:44:01Z" | ---
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
model-index:
- name: vi_en_mbart-large-50-many-to-many-mmt_doc_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi_en_mbart-large-50-many-to-many-mmt_doc_train
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset.
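Judging by the model name, the checkpoint translates between Vietnamese and English; a minimal usage sketch under that assumption (language codes follow the mBART-50 convention):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

repo = "hungphongtrn/vi_en_mbart-large-50-many-to-many-mmt_doc_train"
tokenizer = MBart50TokenizerFast.from_pretrained(repo)
model = MBartForConditionalGeneration.from_pretrained(repo)

tokenizer.src_lang = "vi_VN"
encoded = tokenizer("Xin chào thế giới", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # target English
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```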
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
Aditya-Kuruva/llama2-qlora-finetunined-french | Aditya-Kuruva | "2023-10-17T12:18:10Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | "2023-10-17T12:18:04Z" | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
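For reference, the same settings expressed as a `transformers` `BitsAndBytesConfig` (a sketch reconstructed from the list above, not the original training code):

```python
import torch
from transformers import BitsAndBytesConfig

# Values copied from the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```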
### Framework versions
- PEFT 0.6.0.dev0
|
CuriousPotato/results_v3 | CuriousPotato | "2025-03-10T01:00:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:adapter:Salesforce/codet5-small",
"license:apache-2.0",
"region:us"
] | null | "2025-03-09T23:25:47Z" | ---
library_name: peft
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: results_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_v3
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7503
- Accuracy: 0.6336
- Precision: 0.1607
- Recall: 0.9
- F1 Score: 0.2727
- F2 Score: 0.4688
- Gmean: 0.7419
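The F1 and F2 scores above follow directly from the reported precision and recall; a small sketch of the F-beta formula (not part of the evaluation code) reproduces them up to rounding:

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """F-beta score: recall is weighted beta times as much as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.1607, 0.9, beta=1))  # ~0.2727, the F1 score above
print(f_beta(0.1607, 0.9, beta=2))  # ~0.4688, the F2 score above
```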
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score | F2 Score | Gmean |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|:--------:|:------:|
| No log | 1.0 | 57 | 0.9026 | 0.4809 | 0.0735 | 0.5 | 0.1282 | 0.2315 | 0.4896 |
| 0.7464 | 2.0 | 114 | 0.7800 | 0.5725 | 0.1034 | 0.6 | 0.1765 | 0.3061 | 0.5849 |
| 0.7464 | 3.0 | 171 | 0.7762 | 0.5878 | 0.1452 | 0.9 | 0.25 | 0.4412 | 0.7112 |
| 0.6653 | 4.0 | 228 | 0.8069 | 0.5725 | 0.1406 | 0.9 | 0.2432 | 0.4327 | 0.7006 |
| 0.6653 | 5.0 | 285 | 0.7910 | 0.5954 | 0.1475 | 0.9 | 0.2535 | 0.4455 | 0.7164 |
| 0.6437 | 6.0 | 342 | 0.7414 | 0.6412 | 0.1636 | 0.9 | 0.2769 | 0.4737 | 0.7469 |
| 0.6437 | 7.0 | 399 | 0.8038 | 0.5954 | 0.1475 | 0.9 | 0.2535 | 0.4455 | 0.7164 |
| 0.6328 | 8.0 | 456 | 0.6908 | 0.6794 | 0.18 | 0.9 | 0.3 | 0.5000 | 0.7714 |
| 0.5966 | 9.0 | 513 | 0.7782 | 0.6183 | 0.1552 | 0.9 | 0.2647 | 0.4592 | 0.7318 |
| 0.5966 | 10.0 | 570 | 0.7343 | 0.6565 | 0.1698 | 0.9 | 0.2857 | 0.4839 | 0.7568 |
| 0.5819 | 11.0 | 627 | 0.7569 | 0.6260 | 0.1579 | 0.9 | 0.2687 | 0.4639 | 0.7369 |
| 0.5819 | 12.0 | 684 | 0.7650 | 0.6183 | 0.1552 | 0.9 | 0.2647 | 0.4592 | 0.7318 |
| 0.5643 | 13.0 | 741 | 0.7711 | 0.6260 | 0.1579 | 0.9 | 0.2687 | 0.4639 | 0.7369 |
| 0.5643 | 14.0 | 798 | 0.7470 | 0.6336 | 0.1607 | 0.9 | 0.2727 | 0.4688 | 0.7419 |
| 0.569 | 15.0 | 855 | 0.7503 | 0.6336 | 0.1607 | 0.9 | 0.2727 | 0.4688 | 0.7419 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0 |
chatpdflocal/QwQ-32B-GGUF | chatpdflocal | "2025-03-06T16:45:04Z" | 0 | 1 | null | [
"gguf",
"legal",
"finance",
"PC",
"laptop",
"QwQ-32B",
"GGUF",
"small size",
"chatpdf",
"local",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-06T05:07:44Z" | ---
license: apache-2.0
tags:
- legal
- finance
- PC
- laptop
- QwQ-32B
- GGUF
- small size
- chatpdf
- local
---
# The QwQ-32B model family, developed by Alibaba, consists of lightweight, open-source models that demonstrate state-of-the-art performance across various tasks.
This repo provides GGUF builds of QwQ-32B at several quantization sizes, which are well suited to local deployment on PCs, laptops, and mobile devices.
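As a minimal usage sketch (assuming `llama-cpp-python` and a GGUF file already downloaded from this repo; the filename is a placeholder for whichever quant you pick):

```python
from llama_cpp import Llama

# Load a local GGUF quant of QwQ-32B; model_path is a placeholder filename.
llm = Llama(model_path="QwQ-32B-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key points of this PDF section: ..."}],
)
print(out["choices"][0]["message"]["content"])
```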
# If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively:
- If you are using Zotero to manage and read your personal PDFs, [PapersGPT](https://www.papersgpt.com) is a free plugin that can help you chat with PDFs effectively using your local gemma2.
- You can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load a single PDF or a batch of PDFs at will, and quickly see how the model performs through chat-based reading.
jtatman/phi3-mini-4k-persian-alpaca | jtatman | "2024-06-16T00:47:26Z" | 147 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:M4-ai/Orca-2.0-Tau-1.8B",
"base_model:finetune:M4-ai/Orca-2.0-Tau-1.8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T00:43:49Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
base_model: M4-ai/Orca-2.0-Tau-1.8B
---
# Uploaded model
- **Developed by:** jtatman
- **License:** apache-2.0
- **Finetuned from model :** M4-ai/Orca-2.0-Tau-1.8B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
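As a loading sketch (an assumption, not provided by the card; `max_seq_length` is inferred from the "4k" in the model name, not confirmed):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "jtatman/phi3-mini-4k-persian-alpaca",
    max_seq_length=4096,   # assumption based on the model name
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into inference mode
```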
|
Judeco2025/Sirjude2024 | Judeco2025 | "2025-02-18T06:47:52Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-18T06:47:52Z" | ---
license: apache-2.0
---
|
dkalpakchi/SweCTRL-Mini | dkalpakchi | "2023-05-08T05:59:14Z" | 24 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"ctrl",
"text-generation",
"sv",
"dataset:mc4",
"arxiv:2304.13994",
"arxiv:1910.09700",
"arxiv:1909.05858",
"doi:10.57967/hf/0619",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-05-08T03:38:29Z" | ---
license: bigscience-openrail-m
datasets:
- mc4
language:
- sv
library_name: transformers
inference:
parameters:
top_p: 0.9
repetition_penalty: 1.1
max_new_tokens: 75
do_sample: true
widget:
- text: ":nyheter:"
example_title: "News text"
- text: ":wiki:"
example_title: "Wikipedia text"
- text: ":blogg:"
example_title: "Blog post"
- text: ":forum:"
example_title: "Forum"
- text: ":anons:"
example_title: "Ads"
---
# SweCTRL-Mini
<!-- Provide a quick summary of what the model is/does. -->
SweCTRL-Mini is a large Swedish language model that can be used for inference and fine-tuning on a single consumer-grade GPU. The model is based on the CTRL architecture by Keskar, McCann, Varshney, Xiong, and Socher
(2019), which means that users of the SweCTRL-Mini model can control the genre of the generated text by inserting special tokens in the generation prompts.
Crucially, note that this model is:
- **NOT** trained on following GPT-like instructions,
- **NOT** trained for conversations, like ChatGPT,
- **NOT** trained on any multi-modal data. Only one modality -- text, more than 99% of it in Swedish.
**Note on using the Inference API (text box to the right):** There are a number of presets that start the text with appropriate control codes to control the genre, e.g., `:wiki:` for
texts from Wikipedia. You can add your own prompt on top of these control codes. For instance, if you want a Wikipedia article about Stockholm, you could write
`:wiki: Stockholm`. The generation in the example is limited to 75 new tokens max. Also, generation should normally stop after reaching the ending control code,
which has a `$` symbol at the end, e.g., `:wiki:$` for Wikipedia texts; however, I couldn't configure that here, so please ignore all text after such tokens if they are
generated. Additionally, note that there are **no** filters or other mechanisms for making the text safe from biases or prohibiting it from generating texts on any topic.
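In code, discarding everything after the ending control code is a one-liner (a sketch, not part of the model's tooling):

```python
def trim_at_end_code(text: str, genre: str = "wiki") -> str:
    """Drop everything after the ending control code, e.g. ':wiki:$'."""
    return text.split(f":{genre}:$", 1)[0]

print(trim_at_end_code(":wiki: Stockholm är Sveriges huvudstad ... :wiki:$ <ignore this>"))
```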
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Dmytro Kalpakchi (with supervision from Johan Boye)
- **Shared by:** Dmytro Kalpakchi
- **Model type:** Transformer-based language model trained by predicting the next token
- **Language(s) (NLP):** Swedish
- **License:** BigScience Open RAIL-M
- **Finetuned from model:** None, trained from scratch
### Model Sources
<!-- Provide the basic links for the model. -->
- **Website:** https://swectrl.dev/
- **Repository:** https://github.com/dkalpakchi/SweCTRL-Mini
- **Paper:** https://arxiv.org/pdf/2304.13994.pdf
- **Technical note:** https://zenodo.org/record/7868205
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model should be used for generating texts of various genres in Swedish.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Please refer to Appendix A of the License file for information on use restrictions. The model has a limited context window of 256 tokens, so it will most probably not work well
for text summarization. Additionally, the vast majority of its training data was in Swedish; although it contains tokens in other languages as well, tasks like
Machine Translation would require further fine-tuning.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
To mitigate the inclusion of personally identifiable data, we attempted to remove sources that could contain such data to the best of our ability (see the Technical note for
more details on the data filtering process). However, we have still noted that the model can generate text that includes various forms of biases, which is why we strongly
recommend human curation of the generated texts. Currently, we have conducted no systematic investigation of either the kinds of biases included in the generated texts or how
frequently they occur. The contribution of the community on this matter would be very welcome.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
For further recommendations on the use of the model, please see the associated paper.
## How to Get Started with the Model
The fastest way to start with the model is using the code below:
```py
from transformers import pipeline
pipe = pipeline(model="dkalpakchi/SweCTRL-Mini")
print(pipe(":nyheter:", max_length=256, repetition_penalty=1.1, top_p=0.9))
```
For more advanced uses and other code examples, please see the associated GitHub repository (https://github.com/dkalpakchi/SweCTRL-Mini).
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data includes a *subset* of the cleaned Swedish mC4, as well as some documents from Project Runeberg.
The extensive information on the training data is provided in the Section 1 of the Technical note.
The interface to partially mine training data is available at: https://swectrl.dev/data
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
See Section 1 of the Technical note.
#### Training Hyperparameters
- **Training regime:** fp32
## Evaluation
See Sections 5.3, 6, and 7 in the associated paper, and Section 3 of the Technical note.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 8 A100 GPUs
- **Hours used:** 11907.6 GPU-hours for training and experimentation
- **Provider:** BerzeLiUs supercomputer
- **Carbon Emitted:** No public data on carbon efficiency, so hard to estimate
## Technical Specifications
See Section 3 of the associated paper
## Citation
**BibTeX:**
```bibtex
@article{kalpakchi2023swectrl,
title={SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish},
author={Kalpakchi, Dmytro and Boye, Johan},
journal={arXiv preprint arXiv:2304.13994},
year={2023}
}
```
**APA:**
Kalpakchi, D., & Boye, J. (2023). SweCTRL-Mini: a data-transparent Transformer-based large language model for controllable text generation in Swedish. arXiv preprint arXiv:2304.13994.
## Model Card Authors
Dmytro Kalpakchi ([email protected])
## Model Card Contact
Dmytro Kalpakchi ([email protected])
# References
Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., & Socher, R. (2019). Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. |
masakhane/m2m100_418M_en_tsn_news | masakhane | "2022-09-24T15:05:43Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"en",
"tsn",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-05-10T14:20:25Z" | ---
language:
- en
- tsn
license: afl-3.0
---
|
matgu23/wsmrl-style | matgu23 | "2023-08-27T03:43:27Z" | 5 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-27T03:31:12Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### wsmrl-style Dreambooth model trained by matgu23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
MCG-NJU/Tarsier-7B-RA | MCG-NJU | "2025-03-18T07:52:14Z" | 1 | 0 | null | [
"safetensors",
"llava",
"arxiv:2501.00513",
"base_model:omni-research/Tarsier-7b",
"base_model:finetune:omni-research/Tarsier-7b",
"license:mit",
"region:us"
] | null | "2025-03-17T07:54:37Z" | ---
license: mit
base_model:
- omni-research/Tarsier-7b
---
<div align="center">
<h1 style="margin: 0">
<img src="assets/logo.png" style="width:1.5em; vertical-align: middle; display: inline-block; margin: 0" alt="Logo">
<span style="vertical-align: middle; display: inline-block; margin: 0"><b>CaReBench: A Fine-grained Benchmark for Video Captioning and Retrieval</b></span>
</h1>
<p style="margin: 0">
Yifan Xu, <a href="https://scholar.google.com/citations?user=evR3uR0AAAAJ">Xinhao Li</a>, Yichun Yang, Desen Meng, Rui Huang, <a href="https://scholar.google.com/citations?user=HEuN8PcAAAAJ">Limin Wang</a>
</p>
<p align="center">
🤗 <a href="https://huggingface.co/MCG-NJU/CaRe-7B">Model</a>    |    🤗 <a href="https://huggingface.co/datasets/MCG-NJU/CaReBench">Data</a>   |    📑 <a href="https://arxiv.org/pdf/2501.00513">Paper</a>   
</p>
</div>
## 📝 Introduction
This is Tarsier 7B trained with *Retrieval Adaptation*. Refer to [our paper](https://arxiv.org/pdf/2501.00513) for details.
## Usage
Loading from the Hugging Face remote path is not tested; it is **recommended** to download this checkpoint to your local environment to avoid potential bugs.
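A download sketch with `huggingface_hub` (the local directory is a placeholder matching the path used in the example below):

```python
from huggingface_hub import snapshot_download

# Fetch the full checkpoint to a local directory before loading it.
local_path = snapshot_download(
    "MCG-NJU/Tarsier-7B-RA",
    local_dir="path/to/checkpoints/Tarsier-7B-RA",
)
```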
### For Retrieval Tasks
```python
from utils.video import read_frames_decord
from models.modeling_encoders import AutoEncoder
from torch.nn.functional import cosine_similarity
encoder = AutoEncoder.from_pretrained('path/to/checkpoints/Tarsier-7B-RA')
frames = read_frames_decord(video_path='assets/demo.mp4', num_frames=32)
text = "This video features a man slicing tomatoes in the kitchen."
vision_emb = encoder.encode_vision(frames.unsqueeze(0))
text_emb = encoder.encode_text(text)
print(f'Vision embedding shape: {vision_emb.shape}')
print(f'Text embedding shape: {text_emb.shape}')
print(f'Cosine similarity: {cosine_similarity(vision_emb, text_emb)}')
``` |
lora-library/man-junwym-2 | lora-library | "2023-03-03T18:12:44Z" | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-03-03T18:12:40Z" | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of junwym
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - man-junwym-2
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "photo of junwym" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
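A minimal usage sketch (assumed, not provided by the card), loading the weights on top of the base model named above and sampling with the instance prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lora-library/man-junwym-2")  # this LoRA repo

image = pipe("photo of junwym", num_inference_steps=25).images[0]
image.save("junwym.png")
```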
|
cunghoctienganh/425b7041-e106-4e7f-b4e1-fa8067623904 | cunghoctienganh | "2025-01-14T06:56:39Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-14T06:51:26Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 425b7041-e106-4e7f-b4e1-fa8067623904
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6aa6dc9d696613f9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6aa6dc9d696613f9_train_data.json
type:
field_input: context
field_instruction: oracle_question
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/425b7041-e106-4e7f-b4e1-fa8067623904
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6aa6dc9d696613f9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 34194c55-9d66-41c8-ba81-20bb98f052b6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 34194c55-9d66-41c8-ba81-20bb98f052b6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 425b7041-e106-4e7f-b4e1-fa8067623904
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 111
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0641 | 1.0 | 111 | 1.1285 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ToBeWithYou/MBTI_ENFP | ToBeWithYou | "2024-02-19T18:02:28Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"arxiv:1910.09700",
"base_model:davidkim205/komt-llama2-7b-v1",
"base_model:adapter:davidkim205/komt-llama2-7b-v1",
"region:us"
] | null | "2024-02-19T18:00:49Z" | ---
library_name: peft
base_model: davidkim205/komt-llama2-7b-v1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
RichardErkhov/PraneethSunku_-_vic7b_sqlcoder7b_trial-8bits | RichardErkhov | "2025-03-16T09:31:06Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-16T09:26:58Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vic7b_sqlcoder7b_trial - bnb 8bits
- Model creator: https://huggingface.co/PraneethSunku/
- Original model: https://huggingface.co/PraneethSunku/vic7b_sqlcoder7b_trial/
Original model description:
---
base_model:
- lmsys/vicuna-7b-v1.5
- defog/sqlcoder-7b-2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
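For intuition, SLERP interpolates along the great circle between two weight tensors rather than along the straight line between them; a standalone sketch (not mergekit's implementation):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    omega = torch.arccos((v0n * v1n).sum().clamp(-1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return torch.sin((1.0 - t) * omega) / so * v0 + torch.sin(t * omega) / so * v1
```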
### Models Merged
The following models were included in the merge:
* [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
* [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: lmsys/vicuna-7b-v1.5
layer_range:
- 0
- 32
- model: defog/sqlcoder-7b-2
layer_range:
- 0
- 32
merge_method: slerp
base_model: lmsys/vicuna-7b-v1.5
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
|
jordyvl/resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_simkd_rand | jordyvl | "2023-11-15T11:39:17Z" | 193 | 0 | transformers | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-15T07:24:47Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_simkd_rand
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-cnn_rvl_cdip-NK1000_simkd_rand
This model is a fine-tuned version of [bdpc/resnet101_rvl-cdip](https://huggingface.co/bdpc/resnet101_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4174
- Accuracy: 0.7665
- Brier Loss: 0.3263
- Nll: 2.0962
- F1 Micro: 0.7665
- F1 Macro: 0.7661
- Ece: 0.0504
- Aurc: 0.0700
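Besides accuracy, the card tracks calibration metrics; hypothetical helpers (not the evaluation code) show how the Brier loss and ECE above are typically computed:

```python
import numpy as np

def brier_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared error between predicted probabilities and one-hot labels."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

def ece(probs: np.ndarray, labels: np.ndarray, n_bins: int = 15) -> float:
    """Expected calibration error: bin-weighted gap between confidence and accuracy."""
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            total += mask.mean() * abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
    return float(total)
```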
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 0.8645 | 0.1192 | 0.9514 | 3.2233 | 0.1192 | 0.0652 | 0.1115 | 0.8122 |
| 1.2523 | 2.0 | 500 | 0.7139 | 0.1797 | 0.8939 | 3.1527 | 0.1798 | 0.1283 | 0.0795 | 0.6807 |
| 1.2523 | 3.0 | 750 | 0.6662 | 0.3145 | 0.8040 | 6.3258 | 0.3145 | 0.2485 | 0.0647 | 0.4987 |
| 0.6553 | 4.0 | 1000 | 0.6265 | 0.3738 | 0.7356 | 6.0830 | 0.3738 | 0.3459 | 0.0768 | 0.4070 |
| 0.6553 | 5.0 | 1250 | 0.5609 | 0.531 | 0.6047 | 4.7056 | 0.531 | 0.5234 | 0.0639 | 0.2463 |
| 0.5525 | 6.0 | 1500 | 0.5341 | 0.589 | 0.5450 | 3.9772 | 0.589 | 0.5948 | 0.0718 | 0.1912 |
| 0.5525 | 7.0 | 1750 | 0.4938 | 0.6468 | 0.4733 | 3.3676 | 0.6468 | 0.6486 | 0.0670 | 0.1408 |
| 0.4842 | 8.0 | 2000 | 0.4765 | 0.7 | 0.4288 | 2.8692 | 0.7 | 0.6960 | 0.0666 | 0.1181 |
| 0.4842 | 9.0 | 2250 | 0.5359 | 0.5938 | 0.5534 | 3.9887 | 0.5938 | 0.6011 | 0.1211 | 0.1809 |
| 0.4476 | 10.0 | 2500 | 0.4611 | 0.7037 | 0.4122 | 2.7429 | 0.7037 | 0.6991 | 0.0679 | 0.1097 |
| 0.4476 | 11.0 | 2750 | 0.4460 | 0.7225 | 0.3913 | 2.6158 | 0.7225 | 0.7240 | 0.0725 | 0.0967 |
| 0.4219 | 12.0 | 3000 | 0.4387 | 0.7388 | 0.3752 | 2.4639 | 0.7388 | 0.7356 | 0.0696 | 0.0892 |
| 0.4219 | 13.0 | 3250 | 0.4399 | 0.7378 | 0.3724 | 2.4683 | 0.7378 | 0.7381 | 0.0550 | 0.0898 |
| 0.4007 | 14.0 | 3500 | 0.4441 | 0.737 | 0.3738 | 2.4680 | 0.737 | 0.7334 | 0.0581 | 0.0906 |
| 0.4007 | 15.0 | 3750 | 0.4517 | 0.7248 | 0.3906 | 2.5901 | 0.7248 | 0.7302 | 0.0653 | 0.0961 |
| 0.3825 | 16.0 | 4000 | 0.4430 | 0.737 | 0.3727 | 2.5633 | 0.737 | 0.7350 | 0.0595 | 0.0884 |
| 0.3825 | 17.0 | 4250 | 0.4345 | 0.7482 | 0.3518 | 2.3938 | 0.7482 | 0.7473 | 0.0541 | 0.0784 |
| 0.3672 | 18.0 | 4500 | 0.4642 | 0.7385 | 0.3690 | 2.4016 | 0.7385 | 0.7367 | 0.0571 | 0.0891 |
| 0.3672 | 19.0 | 4750 | 0.4309 | 0.7432 | 0.3585 | 2.3331 | 0.7432 | 0.7464 | 0.0558 | 0.0824 |
| 0.3547 | 20.0 | 5000 | 0.4205 | 0.7602 | 0.3418 | 2.2097 | 0.7602 | 0.7617 | 0.0470 | 0.0744 |
| 0.3547 | 21.0 | 5250 | 0.4174 | 0.7602 | 0.3387 | 2.2020 | 0.7602 | 0.7594 | 0.0488 | 0.0748 |
| 0.3442 | 22.0 | 5500 | 0.4207 | 0.7515 | 0.3458 | 2.2370 | 0.7515 | 0.7543 | 0.0540 | 0.0777 |
| 0.3442 | 23.0 | 5750 | 0.4465 | 0.733 | 0.3783 | 2.5113 | 0.733 | 0.7295 | 0.0576 | 0.0919 |
| 0.3355 | 24.0 | 6000 | 0.4391 | 0.7425 | 0.3649 | 2.4598 | 0.7425 | 0.7459 | 0.0534 | 0.0830 |
| 0.3355 | 25.0 | 6250 | 0.4233 | 0.7598 | 0.3352 | 2.2321 | 0.7598 | 0.7609 | 0.0495 | 0.0729 |
| 0.3274 | 26.0 | 6500 | 0.4174 | 0.7665 | 0.3305 | 2.2062 | 0.7665 | 0.7673 | 0.0482 | 0.0699 |
| 0.3274 | 27.0 | 6750 | 0.4153 | 0.7598 | 0.3389 | 2.2158 | 0.7598 | 0.7583 | 0.0549 | 0.0740 |
| 0.3206 | 28.0 | 7000 | 0.4175 | 0.763 | 0.3323 | 2.1843 | 0.763 | 0.7610 | 0.0494 | 0.0721 |
| 0.3206 | 29.0 | 7250 | 0.4201 | 0.7522 | 0.3467 | 2.2627 | 0.7522 | 0.7495 | 0.0576 | 0.0783 |
| 0.3147 | 30.0 | 7500 | 0.4133 | 0.7625 | 0.3334 | 2.1459 | 0.7625 | 0.7631 | 0.0477 | 0.0733 |
| 0.3147 | 31.0 | 7750 | 0.4213 | 0.7558 | 0.3421 | 2.2877 | 0.7558 | 0.7535 | 0.0567 | 0.0758 |
| 0.3092 | 32.0 | 8000 | 0.4136 | 0.7668 | 0.3294 | 2.1791 | 0.7668 | 0.7662 | 0.0465 | 0.0702 |
| 0.3092 | 33.0 | 8250 | 0.4114 | 0.7638 | 0.3331 | 2.1993 | 0.7638 | 0.7613 | 0.0517 | 0.0722 |
| 0.3046 | 34.0 | 8500 | 0.4154 | 0.764 | 0.3294 | 2.1689 | 0.764 | 0.7639 | 0.0489 | 0.0714 |
| 0.3046 | 35.0 | 8750 | 0.4119 | 0.7638 | 0.3327 | 2.1482 | 0.7638 | 0.7628 | 0.0449 | 0.0725 |
| 0.3001 | 36.0 | 9000 | 0.4183 | 0.759 | 0.3348 | 2.1775 | 0.7590 | 0.7605 | 0.0513 | 0.0731 |
| 0.3001 | 37.0 | 9250 | 0.4097 | 0.7578 | 0.3344 | 2.2029 | 0.7577 | 0.7571 | 0.0525 | 0.0736 |
| 0.2964 | 38.0 | 9500 | 0.4126 | 0.7655 | 0.3292 | 2.1374 | 0.7655 | 0.7657 | 0.0481 | 0.0710 |
| 0.2964 | 39.0 | 9750 | 0.4235 | 0.7642 | 0.3287 | 2.1640 | 0.7642 | 0.7639 | 0.0543 | 0.0707 |
| 0.293 | 40.0 | 10000 | 0.4168 | 0.7678 | 0.3284 | 2.1264 | 0.7678 | 0.7681 | 0.0494 | 0.0702 |
| 0.293 | 41.0 | 10250 | 0.4118 | 0.7682 | 0.3270 | 2.1387 | 0.7682 | 0.7684 | 0.0462 | 0.0702 |
| 0.29 | 42.0 | 10500 | 0.4151 | 0.7618 | 0.3288 | 2.1464 | 0.7618 | 0.7609 | 0.0493 | 0.0718 |
| 0.29 | 43.0 | 10750 | 0.4172 | 0.7608 | 0.3283 | 2.1341 | 0.7608 | 0.7607 | 0.0538 | 0.0708 |
| 0.2876 | 44.0 | 11000 | 0.4159 | 0.7612 | 0.3278 | 2.1561 | 0.7612 | 0.7601 | 0.0514 | 0.0707 |
| 0.2876 | 45.0 | 11250 | 0.4173 | 0.761 | 0.3291 | 2.1825 | 0.761 | 0.7602 | 0.0493 | 0.0711 |
| 0.2855 | 46.0 | 11500 | 0.4137 | 0.761 | 0.3295 | 2.1514 | 0.761 | 0.7598 | 0.0507 | 0.0709 |
| 0.2855 | 47.0 | 11750 | 0.4143 | 0.764 | 0.3278 | 2.1414 | 0.764 | 0.7630 | 0.0483 | 0.0705 |
| 0.2841 | 48.0 | 12000 | 0.4162 | 0.7668 | 0.3262 | 2.1191 | 0.7668 | 0.7666 | 0.0451 | 0.0699 |
| 0.2841 | 49.0 | 12250 | 0.4190 | 0.765 | 0.3271 | 2.1267 | 0.765 | 0.7647 | 0.0496 | 0.0701 |
| 0.283 | 50.0 | 12500 | 0.4174 | 0.7665 | 0.3263 | 2.0962 | 0.7665 | 0.7661 | 0.0504 | 0.0700 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Samsoup/Llama-3.2-3B-Instruct-MovieReviews | Samsoup | "2024-12-11T22:16:53Z" | 141 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-11T22:14:04Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bunbohue/llama3-8b_readme_summarization | bunbohue | "2024-04-30T12:50:48Z" | 4 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | "2024-04-30T10:25:10Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: llama3-8b_readme_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b_readme_summarization
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.9292 | 0.9998 | 2915 | 1.9142 |
| 1.4953 | 2.0 | 5831 | 1.7699 |
| 0.9958 | 2.9998 | 8746 | 1.7412 |
| 0.6889 | 3.9993 | 11660 | 1.8341 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-100-percent-med-high-nv-embed | AdamKasumovic | "2024-06-20T08:25:15Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T08:22:59Z" | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
talli96123/meat_calssify_fresh_crop_fixed_epoch_80_V_0_1_best | talli96123 | "2024-06-11T19:46:17Z" | 193 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-11T19:36:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
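The card leaves this blank; since the tags mark the model as a ViT image classifier, a plausible sketch (an assumption, with a placeholder image path) is:

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="talli96123/meat_calssify_fresh_crop_fixed_epoch_80_V_0_1_best",
)
print(clf("meat_sample.jpg"))  # placeholder image path
```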
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF | tensorblock | "2024-11-26T11:52:23Z" | 53 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-26T11:03:45Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Qwen/Qwen2.5-Coder-14B-Instruct - GGUF
This repo contains GGUF format model files for [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2.5-Coder-14B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q2_K.gguf) | Q2_K | 5.770 GB | smallest, significant quality loss - not recommended for most purposes |
| [Qwen2.5-Coder-14B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q3_K_S.gguf) | Q3_K_S | 6.660 GB | very small, high quality loss |
| [Qwen2.5-Coder-14B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q3_K_M.gguf) | Q3_K_M | 7.339 GB | very small, high quality loss |
| [Qwen2.5-Coder-14B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q3_K_L.gguf) | Q3_K_L | 7.925 GB | small, substantial quality loss |
| [Qwen2.5-Coder-14B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q4_0.gguf) | Q4_0 | 8.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Qwen2.5-Coder-14B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q4_K_S.gguf) | Q4_K_S | 8.573 GB | small, greater quality loss |
| [Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf) | Q4_K_M | 8.988 GB | medium, balanced quality - recommended |
| [Qwen2.5-Coder-14B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q5_0.gguf) | Q5_0 | 10.267 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Qwen2.5-Coder-14B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q5_K_S.gguf) | Q5_K_S | 10.267 GB | large, low quality loss - recommended |
| [Qwen2.5-Coder-14B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q5_K_M.gguf) | Q5_K_M | 10.509 GB | large, very low quality loss - recommended |
| [Qwen2.5-Coder-14B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q6_K.gguf) | Q6_K | 12.125 GB | very large, extremely low quality loss |
| [Qwen2.5-Coder-14B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF/blob/main/Qwen2.5-Coder-14B-Instruct-Q8_0.gguf) | Q8_0 | 15.702 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF --include "Qwen2.5-Coder-14B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
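The same download can also be done from Python with `huggingface_hub` (a sketch; `MY_LOCAL_DIR` is a placeholder directory, as above):
```python
from huggingface_hub import hf_hub_download

# Download a single quantized file into a local directory.
path = hf_hub_download(
    repo_id="tensorblock/Qwen2.5-Coder-14B-Instruct-GGUF",
    filename="Qwen2.5-Coder-14B-Instruct-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(path)
```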
|
Shozi/my_awesome_model | Shozi | "2024-06-02T15:07:03Z" | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-02T15:06:34Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2308
- Accuracy: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
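The auto-generated card omits a usage example; below is a minimal inference sketch with the 🤗 `pipeline` API. The example sentence is illustrative, and the label names depend on the (unknown) fine-tuning dataset:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Shozi/my_awesome_model")
# Labels such as LABEL_0 / LABEL_1 depend on the unknown training data.
print(classifier("This movie was surprisingly good."))
```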
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2245 | 1.0 | 1563 | 0.2139 | 0.9172 |
| 0.1474 | 2.0 | 3126 | 0.2308 | 0.9315 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlexC98/BertWhatCommitPreprocessed | AlexC98 | "2023-05-22T15:38:18Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-22T15:31:15Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BertWhatCommitPreprocessed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertWhatCommitPreprocessed
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3631
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
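The card does not include a usage example; below is a minimal inference sketch. The commit message is illustrative, and the label mapping depends on the (unknown) training data:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "AlexC98/BertWhatCommitPreprocessed"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Fix null pointer dereference in parser", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```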
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.5383 | 0.7333 |
| No log | 2.0 | 76 | 0.4130 | 0.8485 |
| No log | 3.0 | 114 | 0.3096 | 0.8727 |
| No log | 4.0 | 152 | 0.3140 | 0.8788 |
| No log | 5.0 | 190 | 0.2983 | 0.8970 |
| No log | 6.0 | 228 | 0.3019 | 0.8848 |
| No log | 7.0 | 266 | 0.3235 | 0.9030 |
| No log | 8.0 | 304 | 0.3571 | 0.8970 |
| No log | 9.0 | 342 | 0.3457 | 0.8970 |
| No log | 10.0 | 380 | 0.3340 | 0.8909 |
| No log | 11.0 | 418 | 0.3378 | 0.9091 |
| No log | 12.0 | 456 | 0.3389 | 0.9091 |
| No log | 13.0 | 494 | 0.3753 | 0.9030 |
| 0.2144 | 14.0 | 532 | 0.3492 | 0.9152 |
| 0.2144 | 15.0 | 570 | 0.3631 | 0.9152 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jtatman/functioncall-llama2-chat-q3-gguf | jtatman | "2024-02-17T15:56:44Z" | 33 | 0 | null | [
"gguf",
"functions",
"llama2",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-02-17T12:05:11Z" | ---
license: llama2
tags:
- functions
- llama2
- gguf
---
A GGUF version of the v1 Llama 2 function-calling model:
- fLlama-2-7b-chat.q3_K_M.gguf

GGUF versions of v3:
- Llama-2-7b-chat-hf-function-calling-v3-Q4_0.gguf
- Llama-2-7b-chat-hf-function-calling-v3-Q_4_K_M.gguf
- Llama-2-7b-chat-hf-function-calling-v3-Q2_K.gguf
Set up like so:
```JSON
[INST] You have access to the following functions. Use them if required:
[
{
"type": "function",
"function": {
"name": "get_big_stocks",
"description": "Get the names of the largest N stocks by market cap",
"parameters": {
"type": "object",
"properties": {
"number": {
"type": "integer",
"description": "The number of largest stocks to get the names of, e.g. 25"
},
"region": {
"type": "string",
"description": "The region to consider, can be \"US\" or \"World\"."
}
},
"required": [
"number"
]
}
}
},
{
"type": "function",
"function": {
"name": "get_stock_price",
"description": "Get the stock price of an array of stocks",
"parameters": {
"type": "object",
"properties": {
"names": {
"type": "array",
"items": {
"type": "string"
},
"description": "An array of stocks"
}
},
"required": [
"names"
]
}
}
}
]
[INST] Get the names of the five largest stocks in the US by market cap [/INST]
{
"name": "get_big_stocks",
"arguments": {
"number": 5,
"region": "US"
}
}</s>
```
or this:
```JSON
<s>[INST] <<SYS>>
You are a helpful research assistant. The following functions are available for you to fetch further data to answer user questions, if relevant:
{
"function": "search_bing",
"description": "Search the web for content on Bing. This allows users to search online/the internet/the web for content.",
"arguments": [
{
"name": "query",
"type": "string",
"description": "The search query string"
}
]
}
{
"function": "search_arxiv",
"description": "Search for research papers on ArXiv. Make use of AND, OR and NOT operators as appropriate to join terms within the query.",
"arguments": [
{
"name": "query",
"type": "string",
"description": "The search query string"
}
]
}
To call a function, respond - immediately and only - with a JSON object of the following format:
{
"function": "function_name",
"arguments": {
"argument1": "argument_value",
"argument2": "argument_value"
}
}
<</SYS>>[/INST]
[INST] Find papers on high pressure batch reverse osmosis [/INST]
```
Good results through the standard llama.cpp chat web interface; the model can also be used behind an OpenAI-compatible proxy.
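For example, a hedged sketch using the `llama-cpp-python` bindings (the local file path and sampling settings are placeholders):
```python
from llama_cpp import Llama

# Load the quantized model from a local GGUF file (path is a placeholder).
llm = Llama(model_path="./fLlama-2-7b-chat.q3_K_M.gguf", n_ctx=2048)

prompt = "[INST] Get the names of the five largest stocks in the US by market cap [/INST]"
out = llm(prompt, max_tokens=256, stop=["</s>"])
print(out["choices"][0]["text"])
```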
Original credits:
[Trelis/Llama-2-7b-chat-hf-function-calling-v3](https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v3) |
facebook/mms-tts-dug | facebook | "2023-09-01T10:53:58Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T10:53:41Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Chiduruma Text-to-Speech
This repository contains the **Chiduruma (dug)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-dug")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-dug")
text = "some example text in the Chiduruma language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
feditoolbox/scnn_neonatal_fod_estimation | feditoolbox | "2025-04-09T18:57:51Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-04-09T18:54:22Z" | |
bowilleatyou/b0d6fbc5-5397-4150-a95c-941f08bf8658 | bowilleatyou | "2025-03-30T17:08:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-30T16:23:47Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aardvark-labs/stp-classifier-26-1 | aardvark-labs | "2025-03-13T11:49:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-13T10:02:47Z" | ---
library_name: transformers
--- |
edugredu/t2know_bert_20iter | edugredu | "2025-01-13T18:43:05Z" | 6 | 0 | null | [
"pytorch",
"bert",
"es",
"license:apache-2.0",
"region:us"
] | null | "2025-01-13T17:27:21Z" | ---
license: apache-2.0
language:
- es
---
# A Biomedical Nested NER corpus to support the T2KNOW project
This model is a fine-tuned version of the `bert-base-uncased` model using the [T2KNOW corpus](https://zenodo.org/records/12683712).
The training process involved 20 iterations.
The code for both training and evaluation of the model is available in the [T2KNOW GitHub repository](https://github.com/edugredu/T2KNOWcode)
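A minimal inference sketch, assuming the checkpoint loads as a standard token-classification model (the Spanish sentence is illustrative; see the linked repository for the project's own training and evaluation code):
```python
from transformers import pipeline

# Assumes the checkpoint exposes a standard token-classification head.
ner = pipeline(
    "token-classification",
    model="edugredu/t2know_bert_20iter",
    aggregation_strategy="simple",
)
print(ner("El paciente recibió tratamiento con ibuprofeno."))
```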
T2KNOW available models:
- [`bert-base-uncased` with 10 iterations](https://huggingface.co/edugredu/t2know_bert_10iter)
- [`bert-base-uncased` with 20 iterations](https://huggingface.co/edugredu/t2know_bert_20iter)
- [`bert-base-uncased` with 30 iterations](https://huggingface.co/edugredu/t2know_bert_30iter)
- [`biobert-base-cased-v1.1` with 10 iterations](https://huggingface.co/edugredu/t2know_biobert_10iter)
- [`biobert-base-cased-v1.1` with 20 iterations](https://huggingface.co/edugredu/t2know_biobert_20iter)
- [`biobert-base-cased-v1.1` with 30 iterations](https://huggingface.co/edugredu/t2know_biobert_30iter) |
yzhuang/Meta-Llama-3-8B-Instruct_fictional_gsm8k_German_v1 | yzhuang | "2024-05-21T12:22:42Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-20T01:58:02Z" | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: Meta-Llama-3-8B-Instruct_fictional_gsm8k_German_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yufanz/autotree/runs/7283704781.17487-9818c277-4a86-4343-b288-7864588621de)
# Meta-Llama-3-8B-Instruct_fictional_gsm8k_German_v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
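A minimal generation sketch using the tokenizer's chat template (the German word problem is my own example, chosen to match the fine-tuning dataset's name):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yzhuang/Meta-Llama-3-8B-Instruct_fictional_gsm8k_German_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Anna hat 3 Äpfel und kauft 5 weitere. Wie viele Äpfel hat sie jetzt?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```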
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Xinging/llama2-7b_lora-sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531_lora_adapter | Xinging | "2025-01-20T09:32:27Z" | 7 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:other",
"region:us"
] | null | "2025-01-20T09:31:57Z" | ---
library_name: peft
license: other
base_model: meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: llama2-7b_lora-sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b_lora-sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the 0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
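This repository holds only the LoRA adapter, so it must be attached to the base model at load time; a minimal sketch with 🤗 PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base,
    "Xinging/llama2-7b_lora-sft_0.5_ratio_alpaca_gpt4_proj_by_mmlu_ntrain_1531_lora_adapter",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```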
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.20.3 |