| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
sercetexam9/cs221-xlm-roberta-large-eng-finetuned-10-epochs | sercetexam9 | 2025-01-09T06:18:26Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-09T04:44:36Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: cs221-xlm-roberta-large-eng-finetuned-10-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cs221-xlm-roberta-large-eng-finetuned-10-epochs
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4028
- F1: 0.7689
- Roc Auc: 0.8271
- Accuracy: 0.4644
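The card leaves the usage sections empty; as a minimal, hypothetical loading sketch (the combination of F1, ROC-AUC, and low accuracy above suggests a multi-label classification head, so `top_k=None` is assumed here to return scores for all labels):
```python
from transformers import pipeline

# Hypothetical usage sketch, not from the original card: load the checkpoint
# with the standard text-classification pipeline and return all label scores.
classifier = pipeline(
    "text-classification",
    model="sercetexam9/cs221-xlm-roberta-large-eng-finetuned-10-epochs",
    top_k=None,
)
print(classifier("I am thrilled with these results!"))
```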
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5889 | 1.0 | 64 | 0.5797 | 0.4679 | 0.6305 | 0.1877 |
| 0.5842 | 2.0 | 128 | 0.5531 | 0.5389 | 0.6651 | 0.2292 |
| 0.4889 | 3.0 | 192 | 0.4167 | 0.7152 | 0.7844 | 0.4150 |
| 0.3763 | 4.0 | 256 | 0.3889 | 0.7427 | 0.8070 | 0.4249 |
| 0.3043 | 5.0 | 320 | 0.3866 | 0.7479 | 0.8086 | 0.4644 |
| 0.2269 | 6.0 | 384 | 0.3805 | 0.7645 | 0.8230 | 0.4842 |
| 0.1814 | 7.0 | 448 | 0.4028 | 0.7546 | 0.8145 | 0.4684 |
| 0.1567 | 8.0 | 512 | 0.4028 | 0.7689 | 0.8271 | 0.4644 |
| 0.1332 | 9.0 | 576 | 0.3991 | 0.7685 | 0.8260 | 0.4723 |
| 0.1257 | 10.0 | 640 | 0.4022 | 0.7652 | 0.8239 | 0.4684 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
quannh197/6efa0085-e073-4cc3-84cc-0d9fd043a498 | quannh197 | 2025-01-09T06:15:29Z | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T06:15:00Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6efa0085-e073-4cc3-84cc-0d9fd043a498
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5f71a4aedfc9c7e0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5f71a4aedfc9c7e0_train_data.json
type:
field_input: choices
field_instruction: subject
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: quannh197/6efa0085-e073-4cc3-84cc-0d9fd043a498
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5f71a4aedfc9c7e0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6efa0085-e073-4cc3-84cc-0d9fd043a498
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6efa0085-e073-4cc3-84cc-0d9fd043a498
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6efa0085-e073-4cc3-84cc-0d9fd043a498
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 0.08 | 1 | nan |
| 0.0 | 0.24 | 3 | nan |
| 0.0 | 0.48 | 6 | nan |
| 0.0 | 0.72 | 9 | nan |
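The card omits loading instructions; a minimal sketch, assuming the standard PEFT workflow for a LoRA adapter (note the NaN losses reported above, which suggest the adapter may not be usable as-is):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: attach the LoRA adapter to the base model for inference.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/tinyllama-chat", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "quannh197/6efa0085-e073-4cc3-84cc-0d9fd043a498")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama-chat")
```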
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gabrielbosse9/AuroraV2-3B | gabrielbosse9 | 2025-01-09T06:14:19Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-01-09T06:12:09Z | ---
base_model: unsloth/qwen2.5-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gabrielbosse9
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
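The card does not show how to run the GGUF weights; a minimal, hypothetical sketch with llama-cpp-python (the quant filename pattern is an assumption, so check the repository for the actual files):
```python
from llama_cpp import Llama

# Hypothetical sketch: download a GGUF quant from the repo and run it locally.
llm = Llama.from_pretrained(
    repo_id="gabrielbosse9/AuroraV2-3B",
    filename="*Q4_K_M.gguf",  # assumed filename pattern
)
out = llm("Hello, Aurora!", max_tokens=32)
print(out["choices"][0]["text"])
```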
|
mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF | mradermacher | 2025-01-09T06:07:34Z | 303 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:rityak/NM-Vikhr-Magnum-dare-12B",
"base_model:quantized:rityak/NM-Vikhr-Magnum-dare-12B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-08T21:34:08Z | ---
base_model: rityak/NM-Vikhr-Magnum-dare-12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Static quants of https://huggingface.co/rityak/NM-Vikhr-Magnum-dare-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
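For the common single-file case, a minimal download sketch (using a filename taken from the quant table below):
```python
from huggingface_hub import hf_hub_download

# Sketch: fetch one quant file; multi-part files still need concatenation,
# as described in the READMEs linked above.
path = hf_hub_download(
    repo_id="mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF",
    filename="NM-Vikhr-Magnum-dare-12B.Q4_K_M.gguf",
)
print(path)
```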
## Provided Quants
(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NM-Vikhr-Magnum-dare-12B-GGUF/resolve/main/NM-Vikhr-Magnum-dare-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
devhem/modernBERT-emotion | devhem | 2025-01-09T06:07:11Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-09T05:48:37Z | ---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: modernBERT-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernBERT-emotion
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4445
- Accuracy: 0.5587
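The card gives no usage snippet; a minimal sketch, assuming a standard single-label classification head (the emotion label set is not documented here):
```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned emotion classifier.
clf = pipeline("text-classification", model="devhem/modernBERT-emotion")
print(clf("I can't believe this actually worked!"))
```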
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0661 | 1.0 | 1289 | 1.6502 | 0.5164 |
| 1.475 | 2.0 | 2578 | 1.4890 | 0.5509 |
| 1.3346 | 3.0 | 3867 | 1.4586 | 0.5647 |
| 1.2821 | 4.0 | 5156 | 1.4445 | 0.5587 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
trongvox/Phobert-Sentence | trongvox | 2025-01-09T06:05:43Z | 10 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:11347",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-01-09T06:05:12Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:11347
- loss:MultipleNegativesRankingLoss
base_model: vinai/phobert-base
widget:
- source_sentence: "Beefsteak 123 la mot dia chi ban banh mi chao, beefsteak cuc ngon\
\ tai Can Tho ma ban nen mot gan ghe den. Khong gian quan rong rai, sach se, phuc\
\ vu nhanh nhen, gia ca hop ly. Banh mi chao duong Nguyen Van Troi noi tieng ban\
\ banh mi thom ngon, chat luong. Banh mi tai day chia ra lam 2 phan: co thit bo\
\ ma khong thit bo.\n\nQuan Beefsteak 123 la mot dia diem ly tuong cho nhung nguoi\
\ yeu thich thit bo va cac mon an ngon khac. Quan noi tieng voi su ket hop tuyet\
\ voi giua thit bo, pate va trung op la. Neu ban muon thu nhung mon khac, quan\
\ cung co san xuc xich, ca moi, cha lua va xiu mai. Menu cua quan duoc chia thanh\
\ tung phan da duoc ket hop san de ban de dang lua chon. Vi du nhu bo op la pate\
\ xuc xich hoac bo op la pate cha lua. Ban cung co the tao ra cac to hop rieng\
\ cua rieng minh nhu op la ca moi xiu mai.Mot dieu dac biet khi den quan la khi\
\ ban goi mot phan, ban se duoc tang mien phi mot dia xa lach tron. Day la cach\
\ hoan hao de ket hop khau vi cua ban voi cac loai rau song tuoi ngon.Voi khong\
\ gian thoai mai va phuc vu nhanh chong, quan Beefsteak 123 mang den cho ban trai\
\ nghiem am thuc doc dao va ngon mieng. Hay ghe tham quan de thuong thuc nhung\
\ mon an tuyet voi nay!\n\nTHONG TIN LIEN HE:\nDia chi: 9B Nguyen Van Troi, Phuong\
\ Xuan Khanh, Can Tho\nDien thoai: 0907 713 458\nGio mo cua: 06:00 - 14:00\nGia\
\ tham khao: 20.000d - 40.000d\nFanpage: https://www.facebook.com/Beefsteak-123-143170999350605/\n\
\n Goi dien"
sentences:
- Beefsteak 123 - Nguyen Van Troi
- Pho Ngon 37
- Khong tra no hay chi tien ngay Tet
- source_sentence: 'KCC - Pho & Com Ga Xoi Mam la quan an duoc nhieu nguoi yeu thich
tai so 6 Ton That Thuyet, Nam Tu Liem, Ha Noi. Noi day voi khong gian am cung,
rat thich hop cho nhung bua an ben ban be, dong nghiep. Day la quan duoc nhieu
thuc khach danh gia cao ca dich vu lan chat luong do an. Den voi KCC - Pho & Com
Ga Xoi Mam ngoai pho la mon duoc yeu thich nhat ra, quan con co vo so cac mon
an hap dan nhu: com rang dui ga xoi mam, com rang dua bo, com rang cai bo, pho
xao bo, com nong dui ga xoi mam, mi xao bo, com nong cai bo, com nong dua bo.
Doc va la tu nhung hat com gion rum, cung voi do la huong vi cua nuoc sot dac
trung va bi truyen ngam sau vao tan ben trong.
Cac mon nay tuy binh di trong cach che bien nhung mang lai huong vi am thuc manh
me, du de lam to mo bat cu thuc khach nao khi thuong thuc. KCC - Pho & Com Ga
Xoi Mam cam ket mang den cho nguoi tieu dung nhung san pham ngon an toan, co loi
cho suc khoe voi gia rat hop ly. Ban dang o Ton That Thuyet, Ha Noi va dang ban
khoan khong biet dia chi an pho nao ngon thi hay ghe ngay quan an KCC nhe!
THONG TIN LIEN HE: Dia chi: 6 Ton That Thuyet, Nam Tu Liem, Ha Noi Gio mo cua: 06:00
- 14:00 | 17:30 - 22:00
Dat mua ngay'
sentences:
- Nem Nuong Hai Anh
- Ca basa kho thom
- KCC - Pho & Com Ga Xoi Mam
- source_sentence: Banh canh ca loc duoc lam tu bot gao va ca loc. Bot gao sau khi
duoc can mong thanh soi vua an thi duoc tha vao noi nuoc luoc Ca loc go lay phan
thit, uop chut gia vi cho dam vi. Phan xuong ca khong bi bo di ma duoc giu lai
gia nhuyen, loc lay phan nuoc ca roi do vao phan nuoc dung. Mon banh canh ca loc
ngon nhat la khi an con nong, vua chan vua hup vua xuyt xoa cai vi cay nong. Neu
an trong ngay dong thi qua tuyet voi roi phai khong nao. Mot to banh canh ca loc
chi co gia khoang 30.000 dong thoi cac ban nhe.
sentences:
- Banh canh ca loc
- Bun oc, bun oc chan
- Nha hang Trung Duong Marina
- source_sentence: 'Nguyen lieu:Bap chuoi 1 cai Chanh 1 trai Bot chien gion 75 gr
Dau an 100 ml Nuoc mam 3 muong canh Bot ngot 1 muong ca phe Tuong ot 1 muong canh
Duong 1 muong canh Ot bot 1 muong ca pheCach che bien:So che bap chuoi: Dung tay
tach bap chuoi thanh nhung cong nho, sau do ngam bap chuoi vao trong thau nuoc
chanh pha loang de giup bap chuoi khong bi tham den. Tiep tuc go bo nhuy trong
bap chuoi roi rua sach lai voi nuoc.Nhung bot va chien bap chuoi: Bap chuoi sau
khi tach roi va rua sach ban cho bap chuoi ra to, do vao 75gr bot chien gion,
dao deu cho bot tham vao bap chuoi. Bac chao len bep cung voi 100ml dau an dun
soi (luong dau ngap bap chuoi), sau do cho bap chuoi da ao bot vao chien tren
lua vua khoang 5 - 10 phut cho bap chuoi chin vang deu thi vot ra de rao dau.Lam
bap chuoi chien nuoc mam: Bac mot cai chao khac cho vao 10ml dau an (tan dung
luong dau con du khi chien bap chuoi), roi cho vao 3 muong canh nuoc mam, 1 muong
ca phe bot ngot, 1 muong canh tuong ot, 1 muong canh duong, 1 muong ca phe ot
bot khuay tan hon hop cho sanh vang lai khoang 3 phut tren lua vua. Cuoi cung
ban cho bap chuoi da chien vang vao dao deu them 3 phut roi tat bep.Thanh pham:
Bap chuoi gion rum hoa quyen voi vi man man ngot ngot cua nuoc mam, an kem com
trang se cuc ki ngon mieng day. Mon an vo cung de lam nay se khien gia dinh ban
tam tac khen ngon.'
sentences:
- Nha Hang Ca Hoi Song Nhi
- Com nhoi thit hap ot chuong
- Hoa chuoi chien nuoc mam
- source_sentence: "Noi tieng ve do lau doi va huong vi mon an nay o Ha Noi thi phai\
\ ke den hang Banh Duc Nong Thanh Tung. Banh o day hap dan o do deo dai cua bot,\
\ thit nam du day va nem nem vua mieng. Khi phuc vu, mon an nong sot toa ra mui\
\ huong thom lung tu bot, hanh phi, nuoc mam. Mon banh duc o day duoc chan ngap\
\ nuoc mam pha loang vi ngot, hoi man man, co thit bam voi nam meo va rat nhieu\
\ hanh kho da phi vang.Mon banh duc o Banh Duc Nong Thanh Tung duoc chan ngap\
\ nuoc mam pha loang vi ngot, hoi man man, co thit bam voi nam meo va rat nhieu\
\ hanh kho da phi vang. Cach an nay hoi giong voi mon banh gio chan nuoc mam thit\
\ bam o quan pho chua Lang Son gan cho Ban Co. La mon qua an nhe nhang, vua du\
\ lung lung bung, co ve dan da nen rat nhieu nguoi them them, nho nho. Banh duc\
\ nong Ha Noi o day khong bi pha them bot dau xanh nen van giu nguyen duoc huong\
\ vi dac trung. Dac biet, phan nhan con duoc tron them mot it cu dau xao tren\
\ ngon lua lon nen giu duoc do ngot gion.THONG TIN LIEN HE:Dia chi: 112 Truong\
\ Dinh, Quan Hai Ba Trung, Ha NoiGio mo cua: 10:00 - 21:00Dia diem chat luong:\
\ 4.7/5 (14 danh gia tren Google)\n Chi duong Danh gia Google"
sentences:
- Banh Duc
- Let's Eat Buffet
- Banh bi do
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on vinai/phobert-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [vinai/phobert-base](https://huggingface.co/vinai/phobert-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) <!-- at revision c1e37c5c86f918761049cef6fa216b4779d0d01d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("trongvox/Phobert-Sentence")
# Run inference
sentences = [
'Noi tieng ve do lau doi va huong vi mon an nay o Ha Noi thi phai ke den hang Banh Duc Nong Thanh Tung. Banh o day hap dan o do deo dai cua bot, thit nam du day va nem nem vua mieng. Khi phuc vu, mon an nong sot toa ra mui huong thom lung tu bot, hanh phi, nuoc mam. Mon banh duc o day duoc chan ngap nuoc mam pha loang vi ngot, hoi man man, co thit bam voi nam meo va rat nhieu hanh kho da phi vang.Mon banh duc o Banh Duc Nong Thanh Tung duoc chan ngap nuoc mam pha loang vi ngot, hoi man man, co thit bam voi nam meo va rat nhieu hanh kho da phi vang. Cach an nay hoi giong voi mon banh gio chan nuoc mam thit bam o quan pho chua Lang Son gan cho Ban Co. La mon qua an nhe nhang, vua du lung lung bung, co ve dan da nen rat nhieu nguoi them them, nho nho. Banh duc nong Ha Noi o day khong bi pha them bot dau xanh nen van giu nguyen duoc huong vi dac trung. Dac biet, phan nhan con duoc tron them mot it cu dau xao tren ngon lua lon nen giu duoc do ngot gion.THONG TIN LIEN HE:Dia chi: 112 Truong Dinh, Quan Hai Ba Trung, Ha NoiGio mo cua: 10:00 - 21:00Dia diem chat luong: 4.7/5 (14 danh gia tren Google)\n Chi duong Danh gia Google',
'Banh Duc',
'Banh bi do',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 11,347 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 73 tokens</li><li>mean: 127.74 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 8.16 tokens</li><li>max: 24 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------|
| <code>Mamadeli la mot dia chi giup ban giai quyet con them com ga, mi y chuan vi nhat. Nhan vien tai quan nay kha de chiu va chieu khach. Mot suat com ga ta bao gom mot phan com mem, thit ga ta xe thom phuc va dia nuoc mam gung chan voi sot trung rat dam da.Giua long Sai Gon hoa le lai co huong vi cua mon com ga nuc tieng thi con dieu gi khien ban ban khoan ma khong thuong thuc nhi. Thuc don phong phu, gia ca phai chang voi huong vi mon an hoan hao dung vi hap dan la li do giup quan thu hut duoc dong dao khach hang ghe toi thuong xuyen.<br><br>Ngoai ra, voi cach trinh bay mon an day bat mat va mau sac chac chan cac thuc khach khi den day se khong the roi mat khoi mon an dau. Team thich song ao tung chao nghe toi day chac hao huc lam vi do an vua ngon, vua co hinh de song ao chat luong.Va khien ai cung thom them ghen ti khi ban co co hoi duoc thu va trai nghiem o Mamadeli do. Neu ban muon tan huong tai nha thi hay yen tam, Mamadeli hien tai da co mat tren cac app giao hang, cac ban co the theo doi...</code> | <code>Mamadeli - Com ga & Mi y</code> |
| <code>Nguyen lieu:Thit heo xay 300 gr Toi bam 2 muong ca phe Hanh tim bam 2 muong ca phe Gung bam 1 muong ca phe Nuoc mam 1/2 muong canh Nuoc tuong 1 muong canh Bot nang 2 muong canh Giam an 2 muong canh Tuong ca 3 muong canh Dau an 2 muong canh Duong 4 muong canh Muoi 1/4 muong canhCach che bien Thit vien kho chua ngotUop thitBan uop thit voi 2 muong ca phe toi bam, 2 muong ca phe hanh tim, 1 muong ca phe gung bam, 1/4 muong ca phe muoi, 1/2 muong canh nuoc mam, 1 muong canh nuoc tuong, 2 muong canh bot nang.Sau do, ban tron deu de cac gia vi ngam vao nhau va uop khoang 15 phut.<br>Vo vien va chien thitBan vo thit thanh tung vien vua an.Ban dun nong 2 muong canh dau an o lua vua. Khi dau soi, ban cho thit vao va chien vang deu 2 mat.<br>Kho thitBan cho vao chao 4 muong canh duong, 2 muong canh giam an, 3 muong canh tuong ca va 4 muong canh nuoc loc roi dao deu.Ban rim phan nuoc sot voi thit vien 15 phut sau do tat bep va cho ra dia.<br>Thanh phamThit vien mem, thom, vua an cung voi nuoc sot chua chu...</code> | <code>Thit vien kho chua ngot</code> |
| <code>Nguyen lieu:1kg oc1 cu gungHanh khoToi, otSa teNuoc mam, bot ngot, duong...Cach lam:Oc giac khi mua ve, ban cung dem rua sach, roi ngam voi nuoc vo gao co cat them voilat ot trong 3 tieng de oc nhanh nha chat ban ra.Gung ban dem cao vo rua sach, bam nho.Hanh kho, toi boc sach vo. Hanh kho ban thai lat mong, con toi thi bam nhuyen.Ot tuoi rua sach, thai lat.Sau khi ngam xong, ban dem oc giac luoc voi nuoc co cho them vai lat gung hoac sa dap dap. Khi oc chin, ban lay thit oc ra cat lat va de ra dia. Dat chao len bep, cho dau an vao, khi dau soi ban cho hanh kho va toi vao phi thom. Tiep den, ban cho vao 3 muong sa te, ot cat lat, dao deu tay. Dao khoang 5 phut, ban cho oc vao deu roi nem nem voi nuoc mam, duong, bot ngot sao cho vua khau vi. Xao oc khoang 10 phut nua thi tat bep.Vay la hoan thanh mon an roi, gio day ban chi can cho mon an ra dia va cho them vai soi rau ram len tren la xong!</code> | <code>Oc giac xao sa te</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.7042 | 500 | 0.9125 |
| 1.4085 | 1000 | 0.2277 |
| 2.1127 | 1500 | 0.1527 |
| 2.8169 | 2000 | 0.1009 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
phungkhaccuong/40eb2fb6-544f-5188-5d5e-f3d9b221e03c | phungkhaccuong | 2025-01-09T06:01:48Z | 15 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B",
"base_model:adapter:unsloth/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T05:40:24Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 40eb2fb6-544f-5188-5d5e-f3d9b221e03c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5a442201734d24c5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a442201734d24c5_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/40eb2fb6-544f-5188-5d5e-f3d9b221e03c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/5a442201734d24c5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba581326-6778-4c19-88e7-8452f911853c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba581326-6778-4c19-88e7-8452f911853c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 40eb2fb6-544f-5188-5d5e-f3d9b221e03c
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B](https://huggingface.co/unsloth/Qwen2.5-0.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0009 | 10 | nan |
| 0.0 | 0.0018 | 20 | nan |
| 0.0 | 0.0026 | 30 | nan |
| 0.0 | 0.0035 | 40 | nan |
| 0.0 | 0.0044 | 50 | nan |
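As with the other PEFT checkpoint above, the adapter can be loaded and merged in one step; a hypothetical sketch (again, note the NaN validation losses reported above):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical sketch: load base model plus LoRA adapter in one call,
# then merge the adapter weights for standalone inference.
model = AutoPeftModelForCausalLM.from_pretrained(
    "phungkhaccuong/40eb2fb6-544f-5188-5d5e-f3d9b221e03c"
)
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B")
```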
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
zhouyik/colva_internvl2_4b | zhouyik | 2025-01-09T06:01:36Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2501.04670",
"base_model:OpenGVLab/InternVL2-4B",
"base_model:finetune:OpenGVLab/InternVL2-4B",
"license:mit",
"region:us"
]
| image-text-to-text | 2025-01-07T03:07:47Z | ---
license: mit
base_model:
- OpenGVLab/InternVL2-4B
- nvidia/RADIO
pipeline_tag: image-text-to-text
library_name: transformers
---
# CoLVA
[\[📂 GitHub\]](https://github.com/zhouyiks/CoLVA) [\[📜 Paper\]](https://arxiv.org/abs/2501.04670)
## Introduction
As an initial effort to address the systematic shortcomings in the matching capabilities of recent multimodal LLMs (MLLMs),
we release CoLVA, a contrastive MLLM with two novel technical designs:
a fine-grained vision expert with object-level contrastive learning, and an instruction augmentation strategy.
This repository holds the model weights and inference code for CoLVA built on InternVL2-4B.
## Quick Start
We provide example code to run `CoLVA` using `transformers`.
> Please use transformers>=4.47.0 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "zhouyik/colva_internvl2_4b"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
### Inference with Transformers
```python
import os
import json
import cv2
import random
from typing import List
import pycocotools.mask as mask_util
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
import torch.nn.functional as F
from transformers import CLIPImageProcessor
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
VPT_CONTEXT_TOKEN = '<VPT_CONTEXT>'
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=6, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
    # enumerate candidate tiling grids (i x j) within [min_num, max_num]
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=6, upscale=False):
if isinstance(image_file, str):
image = Image.open(image_file).convert('RGB')
else:
image = image_file.convert('RGB')
if upscale:
image = image.resize((image.width * 2, image.height * 2), Image.BILINEAR)
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def polygons_to_bitmask(polygons: List[np.ndarray], height: int, width: int) -> np.ndarray:
"""
Args:
polygons (list[ndarray]): each array has shape (Nx2,)
height, width (int)
Returns:
ndarray: a bool mask of shape (height, width)
"""
if len(polygons) == 0:
# COCOAPI does not support empty polygons
return np.zeros((height, width)).astype(bool)
rles = mask_util.frPyObjects(polygons, height, width)
masks = mask_util.decode(rles)
reduced = np.add.reduce(masks, axis=2)
m = np.where(reduced>=2, 0, reduced)
# rle = mask_util.merge(rles)
return m.astype(bool)
from distinctipy import distinctipy
def contour_rendering(image, masks, mask_ids=None):
colors = distinctipy.get_colors(len(masks)+1)
font = cv2.FONT_HERSHEY_SIMPLEX
text_thickness = 2
font_scale_list = []
label_list = []
color_list = []
label_loc_list = []
for anno_i in range(len(masks)):
mask = masks[anno_i]
contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
if colors[anno_i][0] > 0.9 and colors[anno_i][1] > 0.9 and colors[anno_i][2] > 0.9:
color_anno_i = (colors[-1][2] * 255, colors[-1][1] * 255, colors[-1][0] * 255)
else:
color_anno_i = (colors[anno_i][2] * 255, colors[anno_i][1] * 255, colors[anno_i][0] * 255)
cv2.drawContours(image, contours, -1, color=color_anno_i, thickness=2)
cnt_area = []
cnt_centroid = []
cnt_bbox = []
for cnt in contours:
cnt_area.append(cv2.contourArea(cnt))
M = cv2.moments(cnt)
x, y, w, h = cv2.boundingRect(cnt)
if M["m00"] > 0:
cx = int(M["m10"] / M["m00"])
cy = int(M["m01"] / M["m00"])
else:
cx, cy = x + w/2, y + h/2
cnt_centroid.append((cx, cy))
cnt_bbox.append((w, h))
select_cnt = 0
if len(cnt_area) > 1:
select_cnt = np.argmax(np.array(cnt_area))
select_centroid = cnt_centroid[select_cnt]
visual_prompt_id = anno_i+1 if mask_ids is None else mask_ids[anno_i]
boxW, boxH = cnt_bbox[select_cnt]
if max(boxH, boxW) < 25:
thickness=1
else:
thickness=text_thickness
        # find the optimal font scale: text width/height at most 15% of the bbox width/height
ok = False
for scale in reversed(range(5, 60, 1)):
textSize = cv2.getTextSize(f"{visual_prompt_id}", font, scale/10, thickness)
textW, textH = textSize[0][0], textSize[0][1]
if textH / boxH > 0.15 or textW / boxW > 0.15:
continue
font_scale_list.append(scale/10)
ok = True
break
if not ok:
font_scale_list.append(0.5)
label_list.append(visual_prompt_id)
color_list.append(color_anno_i)
(base_w, base_h), bottom = cv2.getTextSize(f"{visual_prompt_id}", font, font_scale_list[-1], thickness)
label_loc_list.append((
int(select_centroid[0] - base_w/2),
int(select_centroid[1] + (base_h+bottom)/2)
))
font_scale = min(font_scale_list)
for anno_i in range(len(label_list)):
(base_w, base_h), bottom = cv2.getTextSize(f"{label_list[anno_i]}", font, font_scale, thickness)
cv2.rectangle(image, (label_loc_list[anno_i][0], int(label_loc_list[anno_i][1]-base_h-bottom/2)),
(label_loc_list[anno_i][0]+base_w, int(label_loc_list[anno_i][1]+bottom/2)),
color_list[anno_i], -1, 8)
cv2.putText(image, f"{label_list[anno_i]}", label_loc_list[anno_i], font, font_scale,
(255, 255, 255), thickness)
return None
path = "zhouyik/colva_internvl2_4b"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# image-text conversation
pixel_values = load_image(os.path.join(path, "examples/image1.jpg"), max_num=12).to(torch.bfloat16).cuda()
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# muti-images object matching
image_path_list = [os.path.join(path, "examples/match_case/FRAME00_ORI.jpg"), os.path.join(path, "examples/match_case/FRAME01_ORI.jpg")]
anno_file_list = [os.path.join(path, "examples/match_case/FRAME00.json"), os.path.join(path, "examples/match_case/FRAME01_CAND.json")]
# load annotations
region_list = []
for query_json_file in anno_file_list[:-1]:
with open(query_json_file, 'r') as f:
query_anno = json.load(f)
ori_height, ori_width = query_anno[0]['height'], query_anno[0]['width']
segm = query_anno[0]['segmentation']
segm = [np.array(poly) for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
mask = polygons_to_bitmask(segm, ori_height, ori_width)
region_list.append(mask[np.newaxis, :, :].astype(np.uint8))
with open(anno_file_list[-1], 'r') as f:
query_anno = json.load(f)
all_masks = []
for idx in range(len(query_anno)):
ori_height, ori_width = query_anno[idx]['height'], query_anno[idx]['width']
segm = query_anno[idx]['segmentation']
segm = [np.array(poly) for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6]
mask = polygons_to_bitmask(segm, ori_height, ori_width)
all_masks.append(mask)
all_masks = np.stack(all_masks, axis=0)
region_list.append(all_masks.astype(np.uint8))
# draw the visual prompts on the image
overlied_images = [cv2.imread(img_file) for img_file in image_path_list]
for fidx, (image, regions) in enumerate(zip(overlied_images[:-1], region_list[:-1])):
for region in regions:
contours, hierarchy = cv2.findContours(region, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(overlied_images[fidx], contours, -1, color=(255, 255, 0), thickness=2)
random_id = list(range(1, len(region_list[-1])+1))
random.shuffle(random_id)
all_region_ids = random_id
contour_rendering(overlied_images[-1], region_list[-1], random_id)
for fidx, overlied_image in enumerate(overlied_images):
cv2.imwrite(f"./overlied_image_{fidx+1}.jpg", overlied_image)
overlied_images = [Image.fromarray(cv2.cvtColor(item, cv2.COLOR_BGR2RGB)) for item in overlied_images]
# prepare radio inputs
ot_image_processor = CLIPImageProcessor.from_pretrained("./nvidia/RADIO", trust_remote_code=True)
ot_images = [Image.open(image_name).convert('RGB') for image_name in image_path_list]
ot_pixel_values, ot_visual_prompts = [], []
for fi, image in enumerate(ot_images):
w, h = image.size
if w > h:
target_size = (1024, int(h/w*1024))
else:
target_size = (int(w/h*1024), 1024)
resized_image = image.resize(target_size)
cur_w, cur_h = resized_image.size
padded_image = np.ones(shape=(1024, 1024, 3), dtype=np.uint8) * 255
padded_image[:cur_h, :cur_w, :] = np.array(resized_image)
ot_pixel_values.append(ot_image_processor(images=Image.fromarray(padded_image), return_tensors='pt').pixel_values)
ot_pixel_values = torch.cat(ot_pixel_values).to(torch.bfloat16).cuda()
for regions in region_list:
h, w = regions.shape[-2:]
regions = torch.from_numpy(regions).to(ot_pixel_values.dtype).to(ot_pixel_values.device)
if h > w:
padded_regions = regions.new_zeros((regions.shape[0], h, h))
else:
padded_regions = regions.new_zeros((regions.shape[0], w, w))
padded_regions[:, :h, :w] = regions
resized_padded_regions = F.interpolate(padded_regions.unsqueeze(0), size=(1024, 1024), mode='bilinear').squeeze(0)
ot_visual_prompts.append(resized_padded_regions)
# prepare choice items
choice_names = [f"{chr(i)}" for i in range(65,91)]
if len(regions) > len(choice_names) - 1:
valid_num = len(choice_names) - 1
else:
valid_num = len(regions)
region_ids = random_id[:valid_num]
choice_names = choice_names[:valid_num+1]
region_ids.sort()
multi_choices_str = ""
for choice_name, region_id in zip(choice_names[:-1], region_ids):
multi_choices_str = multi_choices_str + f"{choice_name}. {region_id}\n"
multi_choices_str = multi_choices_str + f"{choice_names[-1]}. None of the above choices are correct\n"
question = "Here are two images. In the second image, I have marked several "\
"visual objects with their contours in different colors, and each "\
"is identified by a white numeric ID against a background that "\
"matches the contour's color. Could you please tell me which of "\
"these marked objects is the same as the object marked with a cyan "\
"contour in the first image? Please make a choice from the following options: \n"
object_token_str = ""
for fidx in range(len(overlied_images)-1):
object_token_str = object_token_str + f"Objects in Image-{fidx+1}: <query object>{VPT_CONTEXT_TOKEN}\n"
object_token_str = object_token_str + f"Objects in Image-{len(overlied_images)}: "
sorted_indices = sorted(range(len(all_region_ids)), key=lambda k: all_region_ids[k])
for sorted_idx in sorted_indices:
object_token_str = object_token_str + f"<object-{all_region_ids[sorted_idx]}>{VPT_CONTEXT_TOKEN}, "
object_token_str = object_token_str[:-2] + '.\n'
prefix_str = f"Image-1: <image>\nImage-2: <image>\n" + object_token_str
question = prefix_str + question + multi_choices_str
num_patches_list = []
pixel_values_list = []
for overlied_image in overlied_images:
pixel_values = load_image(overlied_image, max_num=12).to(torch.bfloat16).cuda()
pixel_values_list.append(pixel_values)
num_patches_list.append(pixel_values.size(0))
pixel_values = torch.cat(pixel_values_list, dim=0)
response, history = model.chat(tokenizer, pixel_values, question, generation_config, return_history=True,
num_patches_list=num_patches_list, ot_pixel_values=ot_pixel_values, ot_visual_prompts=ot_visual_prompts)
print(f'User: {question}\nAssistant: {response}')
question = "Why are they the same one?"
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True,
num_patches_list=num_patches_list, ot_pixel_values=ot_pixel_values, ot_visual_prompts=ot_visual_prompts)
print(f'User: {question}\nAssistant: {response}')
```
## License
This project is released under the MIT License. This project uses the pre-trained InternVL2-4B as a component, which is also licensed under the MIT License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@misc{zhou2025sameexploringvisualcorrespondence,
title={Are They the Same? Exploring Visual Correspondence Shortcomings of Multimodal LLMs},
author={Yikang Zhou and Tao Zhang and Shilin Xu and Shihao Chen and Qianyu Zhou and Yunhai Tong and Shunping Ji and Jiangning Zhang and Xiangtai Li and Lu Qi},
year={2025},
eprint={2501.04670},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.04670},
}
``` |
mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF | mradermacher | 2025-01-09T05:58:01Z | 612 | 1 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:HirCoir/Sorah-Llama-3.2-3B-Instruct",
"base_model:quantized:HirCoir/Sorah-Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-09T05:27:43Z | ---
base_model: HirCoir/Sorah-Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
Static quants of https://huggingface.co/HirCoir/Sorah-Llama-3.2-3B-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
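Recent `transformers` releases can also dequantize a llama-architecture GGUF file directly; a sketch using a filename from the quant table below (treating the `gguf_file` workflow as an assumption about your installed versions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load and dequantize a GGUF file directly via the gguf_file
# argument (requires the `gguf` package to be installed).
repo = "mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF"
fname = "Sorah-Llama-3.2-3B-Instruct.Q4_K_M.gguf"
tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=fname)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=fname)
```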
## Provided Quants
(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Sorah-Llama-3.2-3B-Instruct-GGUF/resolve/main/Sorah-Llama-3.2-3B-Instruct.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
visdata/bb_08 | visdata | 2025-01-09T05:56:14Z | 197 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T05:30:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
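Pending details from the authors, the following is only a generic sketch for a transformers causal LM, assuming this checkpoint loads with the standard `AutoModelForCausalLM` API (the tags indicate a llama architecture with the text-generation pipeline).

```python
# Generic sketch, not verified against this checkpoint: standard causal-LM loading.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "visdata/bb_08"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```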
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KaKee/llama-2-7b_full_unknown_various_answer_unknown_seed_0_epoch1 | KaKee | 2025-01-09T05:55:04Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T05:48:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
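Pending details from the authors, here is a hedged sketch for chat-style use, since the repo is tagged `conversational`. It assumes the tokenizer ships a chat template (Llama-2 tokenizers in transformers typically do).

```python
# Hedged sketch: assumes the tokenizer provides a chat template (repo is tagged "conversational").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KaKee/llama-2-7b_full_unknown_various_answer_unknown_seed_0_epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```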
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso08/bf75b2d9-3b25-4926-812f-37d98b0f09d3 | lesso08 | 2025-01-09T05:54:28Z | 12 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
]
| null | 2025-01-09T05:43:34Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf75b2d9-3b25-4926-812f-37d98b0f09d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: true
chat_template: llama3
datasets:
- data_files:
- b4d2ba1803d2784f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b4d2ba1803d2784f_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso08/bf75b2d9-3b25-4926-812f-37d98b0f09d3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 8
mlflow_experiment_name: /tmp/b4d2ba1803d2784f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# bf75b2d9-3b25-4926-812f-37d98b0f09d3
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9254 | 0.0003 | 1 | 11.9281 |
| 11.9347 | 0.0013 | 4 | 11.9278 |
| 11.9369 | 0.0026 | 8 | 11.9266 |
| 11.93 | 0.0039 | 12 | 11.9245 |
| 11.9276 | 0.0052 | 16 | 11.9224 |
| 11.9214 | 0.0065 | 20 | 11.9212 |
| 11.9211 | 0.0078 | 24 | 11.9208 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leixa/1c169d6e-e69f-4e42-bc7b-e7be25bc6d95 | leixa | 2025-01-09T05:47:38Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
]
| null | 2025-01-09T05:43:02Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b4d2ba1803d2784f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b4d2ba1803d2784f_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: leixa/1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b4d2ba1803d2784f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: leixa-personal
wandb_mode: online
wandb_name: 1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 1c169d6e-e69f-4e42-bc7b-e7be25bc6d95
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 11.9284 |
| 11.9146 | 0.0081 | 50 | 11.9143 |
| 11.8898 | 0.0162 | 100 | 11.8886 |
| 11.8762 | 0.0243 | 150 | 11.8732 |
| 11.8768 | 0.0324 | 200 | 11.8710 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/Theros-L3-ColdBrew-Altair-test-bnb-8bit-smashed | PrunaAI | 2025-01-09T05:45:08Z | 8 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:Theros/L3-ColdBrew-Altair-test",
"base_model:quantized:Theros/L3-ColdBrew-Altair-test",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T05:35:24Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Theros/L3-ColdBrew-Altair-test
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Theros/L3-ColdBrew-Altair-test are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed checkpoint and the tokenizer from the original base repo.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Theros-L3-ColdBrew-Altair-test-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Theros/L3-ColdBrew-Altair-test")

# Tokenize a prompt and generate up to 216 new tokens.
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Theros/L3-ColdBrew-Altair-test before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
visdata/bb_07 | visdata | 2025-01-09T05:42:24Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T05:30:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
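Pending details from the authors, a generic one-liner via the high-level `pipeline` API may be the quickest way to probe this checkpoint; this sketch is not verified against the model.

```python
# Generic sketch using the high-level pipeline API; not verified against this checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/bb_07", device_map="auto")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```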
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phungkhaccuong/afbdf303-5ec3-cb90-b17b-283c6f8ad70b | phungkhaccuong | 2025-01-09T05:39:23Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
]
| null | 2025-01-09T05:29:09Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: afbdf303-5ec3-cb90-b17b-283c6f8ad70b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3f2dea70a689bc2e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3f2dea70a689bc2e_train_data.json
type:
field_input: act
field_instruction: task_description
field_output: judgement
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/afbdf303-5ec3-cb90-b17b-283c6f8ad70b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/3f2dea70a689bc2e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2650387b-5b9e-476b-85c6-ce62d4bde2e2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2650387b-5b9e-476b-85c6-ce62d4bde2e2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# afbdf303-5ec3-cb90-b17b-283c6f8ad70b
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8773
## Model description
More information needed
## Intended uses & limitations
More information needed
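Note that this repo holds a LoRA adapter rather than full weights, so inference requires attaching it to the base model. Below is a minimal sketch with `peft`; the prompt is illustrative.

```python
# Minimal sketch: attach this LoRA adapter to its base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "phungkhaccuong/afbdf303-5ec3-cb90-b17b-283c6f8ad70b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```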
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0187 | 1 | 2.0967 |
| 1.9689 | 0.1869 | 10 | 1.5523 |
| 1.1468 | 0.3738 | 20 | 1.0094 |
| 1.0525 | 0.5607 | 30 | 0.9176 |
| 0.9961 | 0.7477 | 40 | 0.8836 |
| 0.9734 | 0.9346 | 50 | 0.8773 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Shawon16/Timesformer_WLASL_100_200_epochs_p20_SR_16 | Shawon16 | 2025-01-09T05:36:44Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"timesformer",
"video-classification",
"generated_from_trainer",
"base_model:facebook/timesformer-base-finetuned-k400",
"base_model:finetune:facebook/timesformer-base-finetuned-k400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2025-01-09T05:35:37Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/timesformer-base-finetuned-k400
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Timesformer_WLASL_100_200_epochs_p20_SR_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Timesformer_WLASL_100_200_epochs_p20_SR_16
This model is a fine-tuned version of [facebook/timesformer-base-finetuned-k400](https://huggingface.co/facebook/timesformer-base-finetuned-k400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2599
- Top 1 Accuracy: 0.5828
- Top 5 Accuracy: 0.7899
- Top 10 Accuracy: 0.8698
- Accuracy: 0.5828
- Precision: 0.5806
- Recall: 0.5828
- F1: 0.5510
## Model description
More information needed
## Intended uses & limitations
More information needed
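Pending author details, here is a hedged inference sketch using the standard transformers Timesformer API. The 16-frame clip length mirrors the "_16" in the repo name and is an assumption; the random frames are stand-ins for a real preprocessed video.

```python
# Hedged sketch: video classification with the transformers Timesformer API.
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

model_id = "Shawon16/Timesformer_WLASL_100_200_epochs_p20_SR_16"
processor = AutoImageProcessor.from_pretrained(model_id)
model = TimesformerForVideoClassification.from_pretrained(model_id)

# 16 stand-in frames, channels-first; replace with real sampled video frames.
video = list(np.random.randn(16, 3, 224, 224))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```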
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 36000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Top 1 Accuracy | Top 5 Accuracy | Top 10 Accuracy | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------------:|:--------------:|:---------------:|:--------:|:---------:|:------:|:------:|
| 19.1155 | 0.005 | 180 | 4.6927 | 0.0089 | 0.0414 | 0.0888 | 0.0089 | 0.0155 | 0.0089 | 0.0105 |
| 18.5538 | 1.0050 | 360 | 4.5821 | 0.0266 | 0.0769 | 0.1302 | 0.0266 | 0.0137 | 0.0266 | 0.0116 |
| 17.5848 | 2.0050 | 540 | 4.3988 | 0.0562 | 0.1450 | 0.2633 | 0.0562 | 0.0486 | 0.0562 | 0.0390 |
| 15.8283 | 3.0050 | 721 | 4.0516 | 0.1302 | 0.2959 | 0.4645 | 0.1302 | 0.1012 | 0.1302 | 0.0976 |
| 13.3102 | 4.005 | 901 | 3.6150 | 0.2249 | 0.4704 | 0.6154 | 0.2249 | 0.1781 | 0.2249 | 0.1741 |
| 11.2113 | 5.0050 | 1081 | 3.2389 | 0.2604 | 0.6065 | 0.7367 | 0.2604 | 0.2422 | 0.2604 | 0.2215 |
| 8.898 | 6.0050 | 1261 | 2.8714 | 0.3757 | 0.6775 | 0.8166 | 0.3757 | 0.3584 | 0.3757 | 0.3324 |
| 6.715 | 7.0050 | 1442 | 2.6518 | 0.4231 | 0.7249 | 0.8402 | 0.4231 | 0.3828 | 0.4231 | 0.3730 |
| 4.8442 | 8.005 | 1622 | 2.3294 | 0.4645 | 0.7929 | 0.8876 | 0.4645 | 0.5077 | 0.4645 | 0.4377 |
| 3.3825 | 9.0050 | 1802 | 2.1747 | 0.4911 | 0.7899 | 0.8964 | 0.4911 | 0.5436 | 0.4911 | 0.4654 |
| 2.0471 | 10.0050 | 1982 | 1.9990 | 0.5148 | 0.8107 | 0.9053 | 0.5178 | 0.5871 | 0.5178 | 0.5057 |
| 1.3242 | 11.0050 | 2163 | 1.8964 | 0.5473 | 0.8166 | 0.8935 | 0.5473 | 0.5822 | 0.5473 | 0.5199 |
| 0.8746 | 12.005 | 2343 | 1.8222 | 0.5562 | 0.8254 | 0.9083 | 0.5562 | 0.5796 | 0.5562 | 0.5320 |
| 0.5537 | 13.0050 | 2523 | 1.7525 | 0.5769 | 0.8343 | 0.9142 | 0.5769 | 0.5813 | 0.5769 | 0.5468 |
| 0.4081 | 14.0050 | 2703 | 1.7351 | 0.5947 | 0.8136 | 0.8964 | 0.5947 | 0.6684 | 0.5947 | 0.5834 |
| 0.17 | 15.0050 | 2884 | 1.6998 | 0.5592 | 0.8225 | 0.9083 | 0.5592 | 0.5763 | 0.5592 | 0.5342 |
| 0.2053 | 16.005 | 3064 | 1.7340 | 0.5651 | 0.8343 | 0.9083 | 0.5651 | 0.6215 | 0.5651 | 0.5390 |
| 0.1434 | 17.0050 | 3244 | 1.7350 | 0.6006 | 0.8432 | 0.9142 | 0.6006 | 0.6347 | 0.6006 | 0.5806 |
| 0.1957 | 18.0050 | 3424 | 1.8179 | 0.5621 | 0.8373 | 0.9142 | 0.5621 | 0.6060 | 0.5621 | 0.5350 |
| 0.1636 | 19.0050 | 3605 | 1.7831 | 0.6154 | 0.8225 | 0.8905 | 0.6154 | 0.6401 | 0.6154 | 0.5917 |
| 0.0908 | 20.005 | 3785 | 1.7552 | 0.6213 | 0.8402 | 0.9053 | 0.6213 | 0.6504 | 0.6213 | 0.6014 |
| 0.058 | 21.0050 | 3965 | 1.8422 | 0.6243 | 0.8254 | 0.9112 | 0.6213 | 0.6392 | 0.6213 | 0.5962 |
| 0.0924 | 22.0050 | 4145 | 1.8347 | 0.6006 | 0.8225 | 0.9201 | 0.6006 | 0.6218 | 0.6006 | 0.5735 |
| 0.0799 | 23.0050 | 4326 | 1.9650 | 0.6036 | 0.8107 | 0.8846 | 0.6036 | 0.6182 | 0.6036 | 0.5724 |
| 0.176 | 24.005 | 4506 | 1.9326 | 0.5858 | 0.8402 | 0.9142 | 0.5858 | 0.6240 | 0.5858 | 0.5671 |
| 0.0786 | 25.0050 | 4686 | 1.7753 | 0.6124 | 0.8491 | 0.9142 | 0.6124 | 0.6607 | 0.6124 | 0.5998 |
| 0.242 | 26.0050 | 4866 | 2.0219 | 0.5769 | 0.7722 | 0.8876 | 0.5769 | 0.6337 | 0.5769 | 0.5552 |
| 0.1767 | 27.0050 | 5047 | 1.9744 | 0.5828 | 0.8166 | 0.9024 | 0.5828 | 0.6330 | 0.5828 | 0.5721 |
| 0.14 | 28.005 | 5227 | 2.1996 | 0.5769 | 0.7811 | 0.8609 | 0.5769 | 0.5983 | 0.5769 | 0.5430 |
| 0.104 | 29.0050 | 5407 | 2.0881 | 0.5769 | 0.8166 | 0.8876 | 0.5769 | 0.6146 | 0.5769 | 0.5641 |
| 0.1454 | 30.0050 | 5587 | 2.3394 | 0.5621 | 0.7959 | 0.8905 | 0.5621 | 0.6280 | 0.5621 | 0.5448 |
| 0.2221 | 31.0050 | 5768 | 1.9360 | 0.5947 | 0.8225 | 0.9024 | 0.5947 | 0.6606 | 0.5947 | 0.5881 |
| 0.1026 | 32.005 | 5948 | 2.0920 | 0.6036 | 0.8107 | 0.8935 | 0.6036 | 0.6376 | 0.6036 | 0.5832 |
| 0.0968 | 33.0050 | 6128 | 2.2746 | 0.5740 | 0.8047 | 0.8846 | 0.5740 | 0.6308 | 0.5740 | 0.5542 |
| 0.1864 | 34.0050 | 6308 | 2.2081 | 0.5888 | 0.8047 | 0.8698 | 0.5888 | 0.6394 | 0.5888 | 0.5704 |
| 0.1353 | 35.0050 | 6489 | 2.1853 | 0.5799 | 0.8254 | 0.8935 | 0.5799 | 0.6133 | 0.5799 | 0.5636 |
| 0.1618 | 36.005 | 6669 | 2.2661 | 0.5710 | 0.7959 | 0.8698 | 0.5710 | 0.6243 | 0.5710 | 0.5515 |
| 0.259 | 37.0050 | 6849 | 2.3163 | 0.5740 | 0.7870 | 0.8580 | 0.5740 | 0.6088 | 0.5740 | 0.5459 |
| 0.3394 | 38.0050 | 7029 | 2.0984 | 0.5769 | 0.7988 | 0.8905 | 0.5769 | 0.6154 | 0.5769 | 0.5614 |
| 0.0833 | 39.0050 | 7210 | 2.2811 | 0.5533 | 0.8047 | 0.8698 | 0.5533 | 0.6051 | 0.5533 | 0.5328 |
| 0.1259 | 40.005 | 7390 | 2.2599 | 0.5828 | 0.7899 | 0.8698 | 0.5828 | 0.5806 | 0.5828 | 0.5510 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.1
|
Theros/L3-ColdBrew-CoT | Theros | 2025-01-09T05:32:24Z | 28 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:DreadPoor/Everything-COT-8B-r128-LoRA",
"base_model:merge:DreadPoor/Everything-COT-8B-r128-LoRA",
"base_model:SvalTek/L3-ColdBrew-SpicyReflect",
"base_model:merge:SvalTek/L3-ColdBrew-SpicyReflect",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:merge:cgato/L3-TheSpice-8b-v0.8.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T05:28:03Z | ---
base_model:
- cgato/L3-TheSpice-8b-v0.8.3
- SvalTek/L3-ColdBrew-SpicyReflect
- SvalTek/L3-ColdBrew-SpicyReflect
- DreadPoor/Everything-COT-8B-r128-LoRA
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [SvalTek/L3-ColdBrew-SpicyReflect](https://huggingface.co/SvalTek/L3-ColdBrew-SpicyReflect) as the base.
### Models Merged
The following models were included in the merge:
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [SvalTek/L3-ColdBrew-SpicyReflect](https://huggingface.co/SvalTek/L3-ColdBrew-SpicyReflect) + [DreadPoor/Everything-COT-8B-r128-LoRA](https://huggingface.co/DreadPoor/Everything-COT-8B-r128-LoRA)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: SvalTek/L3-ColdBrew-SpicyReflect+DreadPoor/Everything-COT-8B-r128-LoRA
- model: cgato/L3-TheSpice-8b-v0.8.3
merge_method: model_stock
base_model: SvalTek/L3-ColdBrew-SpicyReflect
normalize: true
int8_mask: true
dtype: bfloat16
```
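To reproduce the merge, the config above can be fed to the mergekit CLI. This is a sketch assuming the YAML is saved locally as `config.yaml`; the output directory name is an arbitrary choice.

```bash
# Sketch: reproduce the merge from the YAML above (saved locally as config.yaml).
pip install mergekit
mergekit-yaml config.yaml ./L3-ColdBrew-CoT --cuda
```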
|
mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF | mradermacher | 2025-01-09T05:31:33Z | 179 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:rityak/Magnum-DareLinearAbliterated-Instruct-DPO",
"base_model:quantized:rityak/Magnum-DareLinearAbliterated-Instruct-DPO",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-08T21:17:48Z | ---
base_model: rityak/Magnum-DareLinearAbliterated-Instruct-DPO
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/rityak/Magnum-DareLinearAbliterated-Instruct-DPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
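For a quick command-line test, recent llama.cpp builds ship a `llama-cli` binary (the name varies by build and version). The quant filename below is one of the files from the table; the flags are a minimal sketch.

```bash
# Sketch: run a downloaded quant with llama.cpp's CLI (binary name varies by build).
./llama-cli -m Magnum-DareLinearAbliterated-Instruct-DPO.Q4_K_M.gguf \
  -p "Hello!" -n 128
```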
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Magnum-DareLinearAbliterated-Instruct-DPO-GGUF/resolve/main/Magnum-DareLinearAbliterated-Instruct-DPO.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF | mradermacher | 2025-01-09T05:31:04Z | 398 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:rityak/Vikhr-Nemo-dostoevsky-saiga-12b",
"base_model:quantized:rityak/Vikhr-Nemo-dostoevsky-saiga-12b",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-08T20:54:47Z | ---
base_model: rityak/Vikhr-Nemo-dostoevsky-saiga-12b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/rityak/Vikhr-Nemo-dostoevsky-saiga-12b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
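Individual quant files can be fetched programmatically with `huggingface_hub`; the filename below is the Q4_K_M entry from the table and any other row works the same way.

```python
# Sketch: fetch a single quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF",
    filename="Vikhr-Nemo-dostoevsky-saiga-12b.Q4_K_M.gguf",
)
print(path)  # local cache path, ready to pass to a GGUF runtime
```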
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Vikhr-Nemo-dostoevsky-saiga-12b-GGUF/resolve/main/Vikhr-Nemo-dostoevsky-saiga-12b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
i-dhilip/csv_with_des_onnx | i-dhilip | 2025-01-09T05:29:52Z | 12 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-08T12:13:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
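Pending details from the authors, the repo ships both safetensors and ONNX weights for a DistilBERT text classifier, so one hedged option is Optimum's onnxruntime backend. The sketch assumes the ONNX export sits where Optimum expects it; adjust the file location arguments if loading fails.

```python
# Hedged sketch: run the bundled ONNX weights with Optimum's onnxruntime backend.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "i-dhilip/csv_with_des_onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Example product description to classify."))
```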
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trevorkwan/biomed_bert_squadv2 | trevorkwan | 2025-01-09T05:29:00Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2025-01-08T18:37:33Z | ---
library_name: transformers
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: biomed_bert_squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomed_bert_squadv2
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
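While the card is still a stub, the `question-answering` pipeline tag indicates extractive QA over biomedical text. A minimal hedged sketch (the question/context pair is illustrative, not from the card):
```python
# Hedged sketch: extractive QA with this checkpoint via the pipeline API.
from transformers import pipeline

qa = pipeline("question-answering", model="trevorkwan/biomed_bert_squadv2")
result = qa(
    question="What corpus was BiomedBERT pretrained on?",  # illustrative question
    context=(
        "BiomedBERT was pretrained from scratch on PubMed abstracts and "
        "PubMedCentral full-text articles."
    ),
)
print(result["answer"], round(result["score"], 3))
```
Because the model was tuned on squad_v2, some questions may be unanswerable given the context; the pipeline's `handle_impossible_answer=True` call argument can surface that case.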
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
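For readers who want to mirror this setup, the listed values map onto `TrainingArguments` roughly as follows (a sketch only; dataset preparation and the `Trainer` wiring are omitted, and `output_dir` is illustrative):
```python
# Hedged sketch mapping the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="biomed_bert_squadv2",   # illustrative
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```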
### Training results
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 3.0.1
- Tokenizers 0.21.0
|
context-labs/Meta-Llama-3.1-8B-Instruct-FP16 | context-labs | 2025-01-09T05:27:19Z | 13 | 0 | null | [
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
]
| text-generation | 2025-01-09T05:01:50Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Input modalities</strong>
</td>
<td><strong>Output modalities</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="3" >Llama 3.1 (text only)
</td>
<td rowspan="3" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
<td rowspan="3" >15T+
</td>
<td rowspan="3" >December 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
<tr>
<td>405B
</td>
<td>Multilingual Text
</td>
<td>Multilingual Text and code
</td>
<td>128k
</td>
<td>Yes
</td>
</tr>
</table>
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
**Llama 3.1 family of models**. Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** July 23, 2024.
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
**<span style="text-decoration:underline;">Note</span>:** Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases they are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.
## How to use
This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
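The Auto-classes path mentioned above works the same way; a minimal sketch of the same chat (generation settings here are illustrative, not prescribed by the card):
```python
# Hedged sketch: conversational inference with the Auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```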
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes)
### Tool use with transformers
LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).
Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
Here is a quick example showing a single simple tool:
```python
# First, define a tool
def get_current_temperature(location: str) -> float:
"""
Get the current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, Country"
Returns:
The current temperature at the specified location in the specified units, as a float.
"""
return 22. # A real function should probably actually get the temperature!
# Next, load a tokenizer (the original snippet assumed one was already defined),
# create a chat, and apply the chat template
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

messages = [
  {"role": "system", "content": "You are a bot that responds to weather queries."},
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]
inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```
You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:
```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```
and then call the tool and append the result, with the `tool` role, like so:
```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```
After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information,
see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
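For completeness, that follow-up generation step could look like the sketch below; it assumes the `tokenizer` from the snippet above plus a model loaded with the Auto classes, and the generation settings are illustrative:
```python
# Hedged sketch: re-render the chat (now including the tool call and tool
# result) and generate the model's final answer.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer.apply_chat_template(
    messages, tools=[get_current_temperature], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```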
### Use with `llama`
Please follow the instructions in the [repository](https://github.com/meta-llama/llama).
To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.
**Training utilized a cumulative of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy, therefore the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
<table>
<tr>
<td>
</td>
<td><strong>Training Time (GPU hours)</strong>
</td>
<td><strong>Training Power Consumption (W)</strong>
</td>
<td><strong>Training Location-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
<td><strong>Training Market-Based Greenhouse Gas Emissions</strong>
<p>
<strong>(tons CO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3.1 8B
</td>
<td>1.46M
</td>
<td>700
</td>
<td>420
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 70B
</td>
<td>7.0M
</td>
<td>700
</td>
<td>2,040
</td>
<td>0
</td>
</tr>
<tr>
<td>Llama 3.1 405B
</td>
<td>30.84M
</td>
<td>700
</td>
<td>8,930
</td>
<td>0
</td>
</tr>
<tr>
<td>Total
</td>
   <td>39.3M
   </td>
   <td>
   </td>
<td>11,390
</td>
<td>0
</td>
</tr>
</table>
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Benchmark scores
In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
<td rowspan="7" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>66.7
</td>
<td>66.7
</td>
<td>79.5
</td>
<td>79.3
</td>
<td>85.2
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>macro_avg/acc_char
</td>
<td>36.2
</td>
<td>37.1
</td>
<td>55.0
</td>
<td>53.8
</td>
<td>61.6
</td>
</tr>
<tr>
<td>AGIEval English
</td>
<td>3-5
</td>
<td>average/acc_char
</td>
<td>47.1
</td>
<td>47.8
</td>
<td>63.0
</td>
<td>64.6
</td>
<td>71.6
</td>
</tr>
<tr>
<td>CommonSenseQA
</td>
<td>7
</td>
<td>acc_char
</td>
<td>72.6
</td>
<td>75.0
</td>
<td>83.8
</td>
<td>84.1
</td>
<td>85.8
</td>
</tr>
<tr>
<td>Winogrande
</td>
<td>5
</td>
<td>acc_char
</td>
<td>-
</td>
<td>60.5
</td>
<td>-
</td>
<td>83.3
</td>
<td>86.7
</td>
</tr>
<tr>
<td>BIG-Bench Hard (CoT)
</td>
<td>3
</td>
<td>average/em
</td>
<td>61.1
</td>
<td>64.2
</td>
<td>81.3
</td>
<td>81.6
</td>
<td>85.9
</td>
</tr>
<tr>
<td>ARC-Challenge
</td>
<td>25
</td>
<td>acc_char
</td>
<td>79.4
</td>
<td>79.7
</td>
<td>93.1
</td>
<td>92.9
</td>
<td>96.1
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki
</td>
<td>5
</td>
<td>em
</td>
<td>78.5
</td>
<td>77.6
</td>
<td>89.7
</td>
<td>89.8
</td>
<td>91.8
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD
</td>
<td>1
</td>
<td>em
</td>
<td>76.4
</td>
<td>77.0
</td>
<td>85.6
</td>
<td>81.8
</td>
<td>89.3
</td>
</tr>
<tr>
<td>QuAC (F1)
</td>
<td>1
</td>
<td>f1
</td>
<td>44.4
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>51.1
</td>
<td>53.6
</td>
</tr>
<tr>
<td>BoolQ
</td>
<td>0
</td>
<td>acc_char
</td>
<td>75.7
</td>
<td>75.0
</td>
<td>79.0
</td>
<td>79.4
</td>
<td>80.0
</td>
</tr>
<tr>
<td>DROP (F1)
</td>
<td>3
</td>
<td>f1
</td>
<td>58.4
</td>
<td>59.5
</td>
<td>79.7
</td>
<td>79.6
</td>
<td>84.8
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong># Shots</strong>
</td>
<td><strong>Metric</strong>
</td>
<td><strong>Llama 3 8B Instruct</strong>
</td>
<td><strong>Llama 3.1 8B Instruct</strong>
</td>
<td><strong>Llama 3 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 70B Instruct</strong>
</td>
<td><strong>Llama 3.1 405B Instruct</strong>
</td>
</tr>
<tr>
<td rowspan="4" >General
</td>
<td>MMLU
</td>
<td>5
</td>
<td>macro_avg/acc
</td>
<td>68.5
</td>
<td>69.4
</td>
<td>82.0
</td>
<td>83.6
</td>
<td>87.3
</td>
</tr>
<tr>
<td>MMLU (CoT)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>65.3
</td>
<td>73.0
</td>
<td>80.9
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>MMLU-Pro (CoT)
</td>
<td>5
</td>
<td>micro_avg/acc_char
</td>
<td>45.5
</td>
<td>48.3
</td>
<td>63.4
</td>
<td>66.4
</td>
<td>73.3
</td>
</tr>
<tr>
<td>IFEval
</td>
<td>
</td>
<td>
</td>
<td>76.8
</td>
<td>80.4
</td>
<td>82.9
</td>
<td>87.5
</td>
<td>88.6
</td>
</tr>
<tr>
<td rowspan="2" >Reasoning
</td>
<td>ARC-C
</td>
<td>0
</td>
<td>acc
</td>
<td>82.4
</td>
<td>83.4
</td>
<td>94.4
</td>
<td>94.8
</td>
<td>96.9
</td>
</tr>
<tr>
<td>GPQA
</td>
<td>0
</td>
<td>em
</td>
<td>34.6
</td>
<td>30.4
</td>
<td>39.5
</td>
<td>46.7
</td>
<td>50.7
</td>
</tr>
<tr>
<td rowspan="4" >Code
</td>
<td>HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>60.4
</td>
<td>72.6
</td>
<td>81.7
</td>
<td>80.5
</td>
<td>89.0
</td>
</tr>
<tr>
<td>MBPP ++ base version
</td>
<td>0
</td>
<td>pass@1
</td>
<td>70.6
</td>
<td>72.8
</td>
<td>82.5
</td>
<td>86.0
</td>
<td>88.6
</td>
</tr>
<tr>
<td>Multipl-E HumanEval
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>50.8
</td>
<td>-
</td>
<td>65.5
</td>
<td>75.2
</td>
</tr>
<tr>
<td>Multipl-E MBPP
</td>
<td>0
</td>
<td>pass@1
</td>
<td>-
</td>
<td>52.4
</td>
<td>-
</td>
<td>62.0
</td>
<td>65.7
</td>
</tr>
<tr>
<td rowspan="2" >Math
</td>
<td>GSM-8K (CoT)
</td>
<td>8
</td>
<td>em_maj1@1
</td>
<td>80.6
</td>
<td>84.5
</td>
<td>93.0
</td>
<td>95.1
</td>
<td>96.8
</td>
</tr>
<tr>
<td>MATH (CoT)
</td>
<td>0
</td>
<td>final_em
</td>
<td>29.1
</td>
<td>51.9
</td>
<td>51.0
</td>
<td>68.0
</td>
<td>73.8
</td>
</tr>
<tr>
<td rowspan="4" >Tool Use
</td>
<td>API-Bank
</td>
<td>0
</td>
<td>acc
</td>
<td>48.3
</td>
<td>82.6
</td>
<td>85.1
</td>
<td>90.0
</td>
<td>92.0
</td>
</tr>
<tr>
<td>BFCL
</td>
<td>0
</td>
<td>acc
</td>
<td>60.3
</td>
<td>76.1
</td>
<td>83.0
</td>
<td>84.8
</td>
<td>88.5
</td>
</tr>
<tr>
<td>Gorilla Benchmark API Bench
</td>
<td>0
</td>
<td>acc
</td>
<td>1.7
</td>
<td>8.2
</td>
<td>14.7
</td>
<td>29.7
</td>
<td>35.3
</td>
</tr>
<tr>
<td>Nexus (0-shot)
</td>
<td>0
</td>
<td>macro_avg/acc
</td>
<td>18.1
</td>
<td>38.5
</td>
<td>47.8
</td>
<td>56.7
</td>
<td>58.7
</td>
</tr>
<tr>
<td>Multilingual
</td>
<td>Multilingual MGSM (CoT)
</td>
<td>0
</td>
<td>em
</td>
<td>-
</td>
<td>68.9
</td>
<td>-
</td>
<td>86.9
</td>
<td>91.6
</td>
</tr>
</table>
#### Multilingual benchmarks
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Language</strong>
</td>
<td><strong>Llama 3.1 8B</strong>
</td>
<td><strong>Llama 3.1 70B</strong>
</td>
<td><strong>Llama 3.1 405B</strong>
</td>
</tr>
<tr>
   <td rowspan="7" ><strong>General</strong>
   </td>
   <td rowspan="7" ><strong>MMLU (5-shot, macro_avg/acc)</strong>
</td>
<td>Portuguese
</td>
<td>62.12
</td>
<td>80.13
</td>
<td>84.95
</td>
</tr>
<tr>
<td>Spanish
</td>
<td>62.45
</td>
<td>80.05
</td>
<td>85.08
</td>
</tr>
<tr>
<td>Italian
</td>
<td>61.63
</td>
<td>80.4
</td>
<td>85.04
</td>
</tr>
<tr>
<td>German
</td>
<td>60.59
</td>
<td>79.27
</td>
<td>84.36
</td>
</tr>
<tr>
<td>French
</td>
<td>62.34
</td>
<td>79.82
</td>
<td>84.66
</td>
</tr>
<tr>
<td>Hindi
</td>
<td>50.88
</td>
<td>74.52
</td>
<td>80.31
</td>
</tr>
<tr>
<td>Thai
</td>
<td>50.32
</td>
<td>72.95
</td>
<td>78.21
</td>
</tr>
</table>
## Responsibility & Safety
As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
### Responsible deployment
Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the power of the technology, by aligning our model safety for generic use cases that address a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/); refer to it to learn more.
#### Llama 3.1 instruct
Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. For more details on the safety mitigations implemented please read the Llama 3 paper.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone**
Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.1 systems
**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment, as well as to mitigating the safety and security risks inherent to the system and any integration of the model or system with external tools.
As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
#### New capabilities
Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs and possible integrations by developers with third party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.
**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third party services they use to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of the third party safeguards.
**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chat bots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.
**Red teaming**
For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets.
We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical and other risks
We specifically focused our efforts on mitigating the following critical risk areas:
**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.
**2. Child Safety**
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3. Cyber attack enablement**
Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.
Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development. |
NOTKNOWN/big5_llama_openness_low | NOTKNOWN | 2025-01-09T05:24:57Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T05:12:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Theros/L3-ColdBrew-Daybreak | Theros | 2025-01-09T05:21:14Z | 30 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Azazelle/L3-Daybreak-8b-lora",
"base_model:merge:Azazelle/L3-Daybreak-8b-lora",
"base_model:SvalTek/L3-ColdBrew-SpicyReflect",
"base_model:merge:SvalTek/L3-ColdBrew-SpicyReflect",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:merge:cgato/L3-TheSpice-8b-v0.8.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T05:16:52Z | ---
base_model:
- SvalTek/L3-ColdBrew-SpicyReflect
- Azazelle/L3-Daybreak-8b-lora
- cgato/L3-TheSpice-8b-v0.8.3
- SvalTek/L3-ColdBrew-SpicyReflect
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [SvalTek/L3-ColdBrew-SpicyReflect](https://huggingface.co/SvalTek/L3-ColdBrew-SpicyReflect) as a base.
### Models Merged
The following models were included in the merge:
* [SvalTek/L3-ColdBrew-SpicyReflect](https://huggingface.co/SvalTek/L3-ColdBrew-SpicyReflect) + [Azazelle/L3-Daybreak-8b-lora](https://huggingface.co/Azazelle/L3-Daybreak-8b-lora)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: SvalTek/L3-ColdBrew-SpicyReflect+Azazelle/L3-Daybreak-8b-lora
- model: cgato/L3-TheSpice-8b-v0.8.3
merge_method: model_stock
base_model: SvalTek/L3-ColdBrew-SpicyReflect
normalize: true
int8_mask: true
dtype: bfloat16
```
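To reproduce a merge like this, save the YAML above to a file and point mergekit's command-line tool at it, e.g. `mergekit-yaml config.yaml ./L3-ColdBrew-Daybreak` (the output path is illustrative; see the mergekit repository linked above for installation and options).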
|
John6666/void-mix-v25spo-sdxl | John6666 | 2025-01-09T05:16:37Z | 136 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cyber realistic",
"void mix",
"new reality",
"anatomy",
"detailing",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2025-01-09T05:10:33Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cyber realistic
- void mix
- new reality
- anatomy
- detailing
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1035414?modelVersionId=1258609).
This model was created by [voider](https://civitai.com/user/voider).
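Since the card ships without a usage snippet, here is a minimal hedged sketch with 🧨 diffusers. It assumes the repo contains standard SDXL weights (as the tags and pipeline class above suggest); the prompt and settings are illustrative only.
```python
# Hedged sketch: load the checkpoint as a standard SDXL pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/void-mix-v25spo-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, neon-lit city street at night, detailed illustration",  # illustrative prompt
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```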
|
BlackLens/ANLX | BlackLens | 2025-01-09T05:11:19Z | 9 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-09T04:29:39Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ANLX
---
# Anlx
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANLX` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BlackLens/ANLX', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
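Note that `'your prompt'` above is a placeholder: per the trigger words section, the prompt should include `ANLX`, e.g. `pipeline('ANLX, portrait photo').images[0]` (the rest of the prompt is illustrative, not from the original card).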
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Varun-Ponugoti/STFlorence2 | Varun-Ponugoti | 2025-01-09T05:11:12Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| text-generation | 2025-01-09T04:54:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tuanna08go/11933580-2754-1882-57d1-c50f32f11757 | tuanna08go | 2025-01-09T05:10:46Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-0.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T04:24:28Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 11933580-2754-1882-57d1-c50f32f11757
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 734f9ed12aa4506d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/734f9ed12aa4506d_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/11933580-2754-1882-57d1-c50f32f11757
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/734f9ed12aa4506d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b9edf436-374c-4212-90cf-6bf4bd84dfd5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b9edf436-374c-4212-90cf-6bf4bd84dfd5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 11933580-2754-1882-57d1-c50f32f11757
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0008 | 20 | nan |
| 0.0 | 0.0013 | 30 | nan |
| 0.0 | 0.0017 | 40 | nan |
| 0.0 | 0.0021 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
devhem/LLMGUARD | devhem | 2025-01-09T05:09:08Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-09T02:59:36Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: LLMGUARD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLMGUARD
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6730
- Accuracy: 0.7628
## Model description
More information needed
## Intended uses & limitations
More information needed
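For illustration, a minimal hedged inference sketch; the label set produced by this fine-tune is not documented in the card, so the labels in the output are whatever the training data defined:
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier straight from the Hub.
guard = pipeline("text-classification", model="devhem/LLMGUARD")
print(guard("Example user prompt to screen"))  # [{'label': ..., 'score': ...}]
```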
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 32
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.2334 | 1.0 | 876 | 1.8027 | 0.4071 |
| 1.6018 | 2.0 | 1752 | 1.1836 | 0.6644 |
| 0.9703 | 3.0 | 2628 | 0.8345 | 0.7433 |
| 0.7557 | 4.0 | 3504 | 0.7281 | 0.7591 |
| 0.7028 | 5.0 | 4380 | 0.6809 | 0.7717 |
| 0.6372 | 6.0 | 5256 | 0.6530 | 0.7768 |
| 0.6074 | 7.0 | 6132 | 0.6411 | 0.7787 |
| 0.5809 | 8.0 | 7008 | 0.6292 | 0.7785 |
| 0.5594 | 9.0 | 7884 | 0.6255 | 0.7832 |
| 0.5452 | 10.0 | 8760 | 0.6334 | 0.7797 |
| 0.5334 | 11.0 | 9636 | 0.6225 | 0.7761 |
| 0.5091 | 12.0 | 10512 | 0.6347 | 0.7734 |
| 0.493 | 13.0 | 11388 | 0.6217 | 0.7794 |
| 0.4883 | 14.0 | 12264 | 0.6259 | 0.7782 |
| 0.4746 | 15.0 | 13140 | 0.6265 | 0.7725 |
| 0.4698 | 16.0 | 14016 | 0.6351 | 0.7728 |
| 0.4531 | 17.0 | 14892 | 0.6401 | 0.7734 |
| 0.4579 | 18.0 | 15768 | 0.6435 | 0.7731 |
| 0.4412 | 19.0 | 16644 | 0.6391 | 0.7710 |
| 0.4377 | 20.0 | 17520 | 0.6432 | 0.7705 |
| 0.4362 | 21.0 | 18396 | 0.6500 | 0.7681 |
| 0.4269 | 22.0 | 19272 | 0.6541 | 0.7674 |
| 0.4227 | 23.0 | 20148 | 0.6555 | 0.7658 |
| 0.4196 | 24.0 | 21024 | 0.6569 | 0.7678 |
| 0.4216 | 25.0 | 21900 | 0.6608 | 0.7660 |
| 0.4107 | 26.0 | 22776 | 0.6651 | 0.7672 |
| 0.4118 | 27.0 | 23652 | 0.6629 | 0.7645 |
| 0.4054 | 28.0 | 24528 | 0.6685 | 0.7624 |
| 0.4112 | 29.0 | 25404 | 0.6705 | 0.7642 |
| 0.3999 | 30.0 | 26280 | 0.6724 | 0.7625 |
| 0.405 | 31.0 | 27156 | 0.6721 | 0.7628 |
| 0.394 | 32.0 | 28032 | 0.6730 | 0.7628 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
zxboo/task-1-Qwen-Qwen1.5-1.8B | zxboo | 2025-01-09T05:07:19Z | 168 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
]
| null | 2025-01-07T15:54:21Z | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
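Until this section is completed, a hedged sketch of attaching the adapter to its base model; this mirrors standard PEFT usage and assumes nothing model-specific:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then layer this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "zxboo/task-1-Qwen-Qwen1.5-1.8B")
```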
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Ashton2000/checkpoints | Ashton2000 | 2025-01-09T05:06:46Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-07T08:57:28Z | ---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model was trained from scratch on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.1.2+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
pengxiang/TrackDiffusion_SVD_Stage2 | pengxiang | 2025-01-09T05:06:29Z | 0 | 0 | null | [
"text-to-video",
"license:other",
"region:us"
]
| text-to-video | 2024-04-08T08:12:22Z | ---
pipeline_tag: text-to-video
license: other
license_link: LICENSE
---
# TrackDiffusion Model Card
Please download the weights from [this link](https://huggingface.co/pengxiang/trackdiffusion_ytvis).
<!-- Provide a quick summary of what the model is/does. -->
TrackDiffusion is a diffusion model that takes tracklets as conditions and generates a video from them.

## Model Details
### Model Description
TrackDiffusion is a novel video generation framework that enables fine-grained control over complex dynamics in video synthesis by conditioning the generation process on object trajectories.
This approach allows precise manipulation of object trajectories and interactions, addressing the challenges of object appearance, disappearance, and scale change while ensuring consistency across frames.
## Uses
### Direct Use
We provide the weights for the entire UNet, so you can swap it into a diffusers pipeline, for example:
```python
import torch
from diffusers import StableVideoDiffusionPipeline, UNetSpatioTemporalConditionModel

# Load the TrackDiffusion UNet weights in place of the stock SVD UNet.
pretrained_model_path = "stabilityai/stable-video-diffusion-img2vid"
unet = UNetSpatioTemporalConditionModel.from_pretrained(
    "/path/to/unet",  # directory containing the downloaded TrackDiffusion UNet
    torch_dtype=torch.float16,
)
pipe = StableVideoDiffusionPipeline.from_pretrained(
    pretrained_model_path,
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
    low_cpu_mem_usage=True,
)
```
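A hedged sketch of running the pipeline once the UNet is swapped in; this shows only the plain image-to-video call, since how tracklet conditions are passed to TrackDiffusion is not documented in this card, and the file paths are placeholders:
```python
from diffusers.utils import export_to_video, load_image

pipe.to("cuda")
image = load_image("path/to/first_frame.png")  # placeholder conditioning frame
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```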
|
zxboo/task-1-Qwen-Qwen1.5-0.5B | zxboo | 2025-01-09T05:06:00Z | 177 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
]
| null | 2025-01-07T15:52:25Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
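As with any LoRA adapter, a hedged load-and-generate sketch; the prompt and decoding settings below are illustrative assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B"),
    "zxboo/task-1-Qwen-Qwen1.5-0.5B",  # this adapter repo
)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```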
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
Theros/L3-ColdBrew-Altair-test-Q4_K_M-GGUF | Theros | 2025-01-09T05:03:49Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Theros/L3-ColdBrew-Altair-test",
"base_model:quantized:Theros/L3-ColdBrew-Altair-test",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-09T05:03:25Z | ---
base_model: Theros/L3-ColdBrew-Altair-test
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Theros/L3-ColdBrew-Altair-test-Q4_K_M-GGUF
This model was converted to GGUF format from [`Theros/L3-ColdBrew-Altair-test`](https://huggingface.co/Theros/L3-ColdBrew-Altair-test) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Theros/L3-ColdBrew-Altair-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/L3-ColdBrew-Altair-test-Q4_K_M-GGUF --hf-file l3-coldbrew-altair-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/L3-ColdBrew-Altair-test-Q4_K_M-GGUF --hf-file l3-coldbrew-altair-test-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/L3-ColdBrew-Altair-test-Q4_K_M-GGUF --hf-file l3-coldbrew-altair-test-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/L3-ColdBrew-Altair-test-Q4_K_M-GGUF --hf-file l3-coldbrew-altair-test-q4_k_m.gguf -c 2048
```
|
duyphu/1efce3ec-84d9-e88c-ca24-58b931478c1b | duyphu | 2025-01-09T05:02:48Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"region:us"
]
| null | 2025-01-09T03:32:04Z | ---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1efce3ec-84d9-e88c-ca24-58b931478c1b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b686ac23330f53a6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b686ac23330f53a6_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/1efce3ec-84d9-e88c-ca24-58b931478c1b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/b686ac23330f53a6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a3354f0e-cd87-4819-a32a-8cc6640c9c0d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a3354f0e-cd87-4819-a32a-8cc6640c9c0d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1efce3ec-84d9-e88c-ca24-58b931478c1b
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 6.5144 |
| 4.5793 | 0.0006 | 10 | 3.3265 |
| 1.9241 | 0.0012 | 20 | 1.8077 |
| 1.6689 | 0.0018 | 30 | 1.6964 |
| 1.6375 | 0.0024 | 40 | 1.6703 |
| 1.636 | 0.0030 | 50 | 1.6652 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
exala/db_mc2_11.2 | exala | 2025-01-09T05:01:07Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-09T05:00:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
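Until this is documented, a minimal hedged sketch; the task (text classification) is taken from the repo's tags, and the labels are whatever the fine-tune used:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="exala/db_mc2_11.2")
print(clf("Some input text to classify"))
```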
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chauhoang/6d9eb868-74b1-86ec-1aa3-9ae92576d4cb | chauhoang | 2025-01-09T04:52:30Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-0.5B",
"base_model:adapter:Qwen/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T03:27:40Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6d9eb868-74b1-86ec-1aa3-9ae92576d4cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- df935eb598f45854_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/df935eb598f45854_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/6d9eb868-74b1-86ec-1aa3-9ae92576d4cb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/df935eb598f45854_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c91c4acc-158c-4e1a-973c-9c1c0095d374
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c91c4acc-158c-4e1a-973c-9c1c0095d374
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6d9eb868-74b1-86ec-1aa3-9ae92576d4cb
This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.9054 |
| 4.6796 | 0.0002 | 10 | 4.6405 |
| 4.3232 | 0.0004 | 20 | 3.8615 |
| 3.6539 | 0.0007 | 30 | 3.5444 |
| 3.1787 | 0.0009 | 40 | 3.4470 |
| 3.3191 | 0.0011 | 50 | 3.4339 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bryanculturit/code-llama-7b-text-to-sql | bryanculturit | 2025-01-09T04:50:26Z | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| null | 2025-01-04T12:01:32Z | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
library_name: peft
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
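One hedged usage sketch: load the LoRA adapter on its CodeLlama base with PEFT; the SQL prompt format below is an assumption, since the card does not document the template used during fine-tuning:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id),
    "bryanculturit/code-llama-7b-text-to-sql",
)

# Hypothetical prompt template; adjust to whatever format the training data used.
prompt = "-- Schema: users(id, name)\n-- Question: how many users are there?\nSELECT"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```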
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
mradermacher/Phi-4-jackterated-i1-GGUF | mradermacher | 2025-01-09T04:49:50Z | 935 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:JackCloudman/Phi-4-jackterated",
"base_model:quantized:JackCloudman/Phi-4-jackterated",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-01-09T02:50:31Z | ---
base_model: JackCloudman/Phi-4-jackterated
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/JackCloudman/Phi-4-jackterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
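As one concrete invocation (mirroring llama.cpp's `--hf-repo`/`--hf-file` flags; the quant file name is taken from the table below and the prompt is arbitrary):
```bash
llama-cli --hf-repo mradermacher/Phi-4-jackterated-i1-GGUF \
  --hf-file Phi-4-jackterated.i1-Q4_K_M.gguf \
  -p "Explain imatrix quantization in one sentence."
```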
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ1_S.gguf) | i1-IQ1_S | 3.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ1_M.gguf) | i1-IQ1_M | 3.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ2_S.gguf) | i1-IQ2_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ2_M.gguf) | i1-IQ2_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q2_K.gguf) | i1-Q2_K | 5.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ3_S.gguf) | i1-IQ3_S | 6.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q4_0.gguf) | i1-Q4_0 | 8.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q4_1.gguf) | i1-Q4_1 | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF/resolve/main/Phi-4-jackterated.i1-Q6_K.gguf) | i1-Q6_K | 12.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nbninh/44d17704-a789-4f89-a03c-cbd05e4587af | nbninh | 2025-01-09T04:48:12Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T04:36:27Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 44d17704-a789-4f89-a03c-cbd05e4587af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 647c971518845557_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/647c971518845557_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/44d17704-a789-4f89-a03c-cbd05e4587af
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/647c971518845557_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 44d17704-a789-4f89-a03c-cbd05e4587af
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2965 | 0.0326 | 200 | 1.4801 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso03/337ff9da-e295-4e32-a881-eee4250c4f79 | lesso03 | 2025-01-09T04:47:04Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T01:03:12Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 337ff9da-e295-4e32-a881-eee4250c4f79
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 0a48468562cffe67_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0a48468562cffe67_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso03/337ff9da-e295-4e32-a881-eee4250c4f79
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/0a48468562cffe67_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 20
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ea29f12f-d5f0-4cba-ae1b-d7c84b62adab
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ea29f12f-d5f0-4cba-ae1b-d7c84b62adab
warmup_steps: 5
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 337ff9da-e295-4e32-a881-eee4250c4f79
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 4 | nan |
| 0.0 | 0.0001 | 8 | nan |
| 0.0 | 0.0002 | 12 | nan |
| 0.0 | 0.0003 | 16 | nan |
| 0.0 | 0.0004 | 20 | nan |
| 0.0 | 0.0004 | 24 | nan |
| 0.0 | 0.0005 | 28 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nexspear/a8ec8952-c2d3-4de0-92b4-7d95f88d7cc7 | Nexspear | 2025-01-09T04:45:31Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T04:31:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a8ec8952-c2d3-4de0-92b4-7d95f88d7cc7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d0d971e5e168f685_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d0d971e5e168f685_train_data.json
type:
field_input: distraction
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Nexspear/a8ec8952-c2d3-4de0-92b4-7d95f88d7cc7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d0d971e5e168f685_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: leixa-personal
wandb_mode: online
wandb_name: a8ec8952-c2d3-4de0-92b4-7d95f88d7cc7
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: a8ec8952-c2d3-4de0-92b4-7d95f88d7cc7
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# a8ec8952-c2d3-4de0-92b4-7d95f88d7cc7
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.1173 |
| 2.3683 | 0.0034 | 50 | 2.3304 |
| 2.2206 | 0.0068 | 100 | 2.2709 |
| 2.1521 | 0.0102 | 150 | 2.2568 |
| 2.1816 | 0.0137 | 200 | 2.2534 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bbytxt/1c20bf9a-004b-40e9-b897-153b5b65d1aa | bbytxt | 2025-01-09T04:44:08Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T04:36:11Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 647c971518845557_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/647c971518845557_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: bbytxt/1c20bf9a-004b-40e9-b897-153b5b65d1aa
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/647c971518845557_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1c20bf9a-004b-40e9-b897-153b5b65d1aa
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.8134 |
| 2.4445 | 0.0081 | 50 | 1.5701 |
| 2.1646 | 0.0163 | 100 | 1.5066 |
| 2.0061 | 0.0244 | 150 | 1.4818 |
| 1.7573 | 0.0326 | 200 | 1.4778 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik1987/5db9624f-b68d-4a17-b13d-4731d263a258 | dimasik1987 | 2025-01-09T04:43:15Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T04:36:26Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5db9624f-b68d-4a17-b13d-4731d263a258
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 647c971518845557_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/647c971518845557_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik1987/5db9624f-b68d-4a17-b13d-4731d263a258
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/647c971518845557_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5db9624f-b68d-4a17-b13d-4731d263a258
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.6814 |
| 1.7618 | 0.0013 | 8 | 1.6409 |
| 1.6398 | 0.0026 | 16 | 1.5392 |
| 1.4514 | 0.0039 | 24 | 1.5170 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/619da769-33b0-4b70-bc6f-479e23002066 | hongngo | 2025-01-09T04:41:50Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T04:40:00Z | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 619da769-33b0-4b70-bc6f-479e23002066
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7e8c476129b7ae82_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7e8c476129b7ae82_train_data.json
type:
field_input: ''
field_instruction: text
field_output: inputs.output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/619da769-33b0-4b70-bc6f-479e23002066
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7e8c476129b7ae82_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ff72b1e5-8663-40b2-acd7-441583314975
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ff72b1e5-8663-40b2-acd7-441583314975
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 619da769-33b0-4b70-bc6f-479e23002066
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0327 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/MT-BM-gemma-2-9B-GGUF | mradermacher | 2025-01-09T04:40:37Z | 271 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT-BM-gemma-2-9B",
"base_model:quantized:zelk12/MT-BM-gemma-2-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-09T03:51:09Z | ---
base_model: zelk12/MT-BM-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT-BM-gemma-2-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
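As an illustrative sketch (not part of the original card), a single quant file can also be fetched programmatically with `huggingface_hub`; the filename below matches the Q4_K_M entry in the table that follows:
```python
# Hedged sketch: download one GGUF quant from this repo for use with llama.cpp.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MT-BM-gemma-2-9B-GGUF",
    filename="MT-BM-gemma-2-9B.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded quant
```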
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT-BM-gemma-2-9B-GGUF/resolve/main/MT-BM-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Phi-4-jackterated-GGUF | mradermacher | 2025-01-09T04:40:37Z | 175 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:JackCloudman/Phi-4-jackterated",
"base_model:quantized:JackCloudman/Phi-4-jackterated",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-08T20:24:13Z | ---
base_model: JackCloudman/Phi-4-jackterated
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/JackCloudman/Phi-4-jackterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi-4-jackterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q2_K.gguf) | Q2_K | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q3_K_S.gguf) | Q3_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.IQ4_XS.gguf) | IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q5_K_S.gguf) | Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q5_K_M.gguf) | Q5_K_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q6_K.gguf) | Q6_K | 12.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi-4-jackterated-GGUF/resolve/main/Phi-4-jackterated.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
fedovtt/d17e3bda-5714-459e-a71b-73e129eeeafe | fedovtt | 2025-01-09T04:40:25Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-09T04:35:57Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d17e3bda-5714-459e-a71b-73e129eeeafe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 647c971518845557_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/647c971518845557_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: fedovtt/d17e3bda-5714-459e-a71b-73e129eeeafe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/647c971518845557_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1c20bf9a-004b-40e9-b897-153b5b65d1aa
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d17e3bda-5714-459e-a71b-73e129eeeafe
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.1085 |
| 2.1186 | 0.0013 | 8 | 1.9685 |
| 1.8305 | 0.0026 | 16 | 1.7843 |
| 1.6561 | 0.0039 | 24 | 1.7394 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF | vtriple | 2025-01-09T04:34:56Z | 66 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:vtriple/Qwen-2.5-7B-Threatflux",
"base_model:quantized:vtriple/Qwen-2.5-7B-Threatflux",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-09T04:34:34Z | ---
license: apache-2.0
base_model: vtriple/Qwen-2.5-7B-Threatflux
tags:
- llama-cpp
- gguf-my-repo
---
# vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF
This model was converted to GGUF format from [`vtriple/Qwen-2.5-7B-Threatflux`](https://huggingface.co/vtriple/Qwen-2.5-7B-Threatflux) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/vtriple/Qwen-2.5-7B-Threatflux) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -c 2048
```
|
VERSIL91/3b6bab0a-febf-45b3-b00d-6b17237ec262 | VERSIL91 | 2025-01-09T04:34:52Z | 10 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
]
| null | 2025-01-09T04:32:50Z | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b6bab0a-febf-45b3-b00d-6b17237ec262
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1e02c0fd496964cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1e02c0fd496964cc_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/3b6bab0a-febf-45b3-b00d-6b17237ec262
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/1e02c0fd496964cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3b6bab0a-febf-45b3-b00d-6b17237ec262
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3b6bab0a-febf-45b3-b00d-6b17237ec262
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3b6bab0a-febf-45b3-b00d-6b17237ec262
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 44.8594 | 0.0007 | 1 | 2.7306 |
| 49.4531 | 0.0033 | 5 | 2.7168 |
| 47.6875 | 0.0065 | 10 | 2.6825 |
| 45.7188 | 0.0098 | 15 | 2.6543 |
| 45.0234 | 0.0130 | 20 | 2.6470 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
samoline/140e4060-14bf-47a8-9a28-726d943aefc9 | samoline | 2025-01-09T04:32:24Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
]
| null | 2025-01-09T04:28:42Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 140e4060-14bf-47a8-9a28-726d943aefc9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e23b3d7a0bb6b751_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e23b3d7a0bb6b751_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/140e4060-14bf-47a8-9a28-726d943aefc9
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/e23b3d7a0bb6b751_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 2090ffe1-c3b0-40df-aa23-999090f36cd7
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 2090ffe1-c3b0-40df-aa23-999090f36cd7
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 140e4060-14bf-47a8-9a28-726d943aefc9
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6369 | 0.0001 | 1 | 0.7100 |
| 0.7058 | 0.0001 | 2 | 0.7097 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dzanbek/272fa23f-5941-44c4-9f54-bc964f647a9a | dzanbek | 2025-01-09T04:31:53Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
]
| null | 2025-01-09T03:55:14Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 272fa23f-5941-44c4-9f54-bc964f647a9a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c699451d3dc0028_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c699451d3dc0028_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/272fa23f-5941-44c4-9f54-bc964f647a9a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/5c699451d3dc0028_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4fd7ad13-9b31-4f42-9994-6d2cc4618ed6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4fd7ad13-9b31-4f42-9994-6d2cc4618ed6
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 272fa23f-5941-44c4-9f54-bc964f647a9a
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4098
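For standalone use, the adapter could be folded into the base weights; a hedged sketch (assuming the adapter in this repo is compatible with the base checkpoint) is:
```python
# Hedged sketch: merge the LoRA deltas into the base model and save a copy.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")
model = PeftModel.from_pretrained(base, "dzanbek/272fa23f-5941-44c4-9f54-bc964f647a9a")

merged = model.merge_and_unload()  # folds the LoRA weights into the base layers
merged.save_pretrained("llama3-8b-lexi-merged")  # hypothetical output directory
```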
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.3411 |
| 3.2032 | 0.0009 | 8 | 2.7878 |
| 2.5812 | 0.0017 | 16 | 2.4820 |
| 2.4787 | 0.0026 | 24 | 2.4098 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Zeezu/tinyBert-model | Zeezu | 2025-01-09T04:31:13Z | 5 | 0 | null | [
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"region:us"
]
| null | 2025-01-09T04:18:17Z | ---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a PyTorch pre-trained model obtained by converting a TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small), and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)) and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task.
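Since these compact checkpoints are meant to be fine-tuned, a minimal loading sketch (illustrative only, assuming a binary sequence-classification task) might look like:
```python
# Hedged sketch: load bert-tiny with a fresh classification head for fine-tuning.
# num_labels=2 is an assumption for a binary task such as sentiment analysis.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny", num_labels=2
)

inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits  # head is untrained: fine-tune before relying on it
```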
If you use the model, please consider citing both papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
Other models to check out:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
mini1013/master_item_top_bt9 | mini1013 | 2025-01-09T04:30:52Z | 53 | 0 | setfit | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
]
| text-classification | 2024-12-29T09:31:39Z | ---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 참존 디에이지 레드에디션 콘트롤크림 180ml (#M)홈>화장품/미용>마스크/팩>마사지크림/젤 Naverstore > 화장품/미용
> 마스크/팩 > 마사지크림/젤
- text: 바이오가 밀크 아미노산 크림 LotteOn > 뷰티 > 헤어/바디 > 바디케어 > 바디로션/크림 LotteOn > 뷰티 > 헤어/바디
> 바디케어 > 바디로션/크림
- text: 비오템 비오템 Life Plankton Sensitive Emulsion 50ml (#M)쿠팡 홈>뷰티>뷰티소품>용기/거울/기타소품>기타소품
LOREAL > Coupang > 비오템 > Branded > 비오템
- text: 푸드어홀릭 히알루론산 수분 젤 크림 300ml (#M)위메프 > 뷰티 > 스킨케어 > 크림 > 마사지크림 위메프 > 뷰티 > 스킨케어
> 크림 > 마사지크림
- text: 마몽드 로즈워터 토너 500ml × 1개 (#M)쿠팡 홈>뷰티>스킨케어>스킨 Coupang > 뷰티 > 스킨케어 > 스킨
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: mini1013/master_domain
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.912736827548057
name: Accuracy
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
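A minimal training sketch of this two-step procedure (illustrative only, assuming the setfit v1 `Trainer` API; the toy dataset below is a stand-in, and the hyperparameters mirror the card's training section):
```python
# Hedged sketch of the SetFit two-step procedure described above.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot dataset: short product titles with integer class labels.
train_dataset = Dataset.from_dict({
    "text": ["마몽드 로즈워터 토너 500ml", "닥터지 레드 블레미쉬 클리어 수딩크림 70ml"],
    "label": [4, 9],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=64, num_epochs=30)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head
```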
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 11 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'휘또뷔스뜨 플러스 데콜테 50ml (가슴 에센스) ssg > 뷰티 > 헤어/바디 > 바디케어 > 바디로션/크림;SSG.COM/바디케어/바디로션/크림/오일/바디로션/크림;(#M)SSG.COM>바디케어>바디로션/크림/오일>바디로션/크림 ssg > 뷰티 > 바디케어 > 바디로션/크림/오일'</li><li>'메디필 나이테 실 넥크림 100ml (#M)11st>스킨케어>탄력크림>탄력크림 11st > 뷰티 > 스킨케어 > 탄력크림 > 탄력크림'</li><li>'[1만원 상품권][4][단독] 기적의 크림 60ml 세트 (+18만 5천원 상당 넥/데콜테 크림) 모이스춰라이징 소프트 크림 ssg > 뷰티 > 스킨케어 > 크림;ssg > 뷰티 > 명품화장품 > 스킨케어 세트;ssg > 뷰티 > 스킨케어 > 스킨케어세트 ssg > 뷰티 > 스킨케어 > 크림'</li></ul> |
| 4 | <ul><li>'[랑콤] 토닉 꽁포르 400ml 세트 (+이드라젠 크림 30ml 용량 추가 증정) 없음 (#M)홈>스킨케어>스킨/토너 HMALL > 현대백화점 > 화장품 > 스킨케어 > 스킨로션/미스트'</li><li>'아이오페 라이브 리프트 소프너 스킨 인텐시브 150ml MinSellAmount (#M)화장품/향수>스킨케어>스킨/토너 Gmarket > 뷰티 > 화장품/향수 > 스킨케어 > 스킨/토너'</li><li>'설린수 150ml 150~300ml LotteOn > 뷰티 > 스킨케어 > 스킨/토너 LotteOn > 뷰티 > 스킨케어 > 스킨/토너'</li></ul> |
| 8 | <ul><li>'그린티 히알루론산 로션 170mL 레티놀 시카 흔적 앰플 30mL + 레티놀 앰플 7mL (#M)위메프 > 뷰티 > 남성화장품 > 남성 스킨케어 > 남성스킨 위메프 > 뷰티 > 남성화장품 > 남성 스킨케어 > 남성스킨'</li><li>'아이오페 바이오 컨디셔닝 에센스 168ml 아이오페 바이오 컨디셔닝 에센스 216ml (#M)홈>화장품/미용>스킨케어>에센스 Naverstore > 화장품/미용 > 스킨케어 > 에센스'</li><li>'퓨어샷 나이트 리부트 세럼 50ml LotteOn > 백화점 TAP > 명품화장품 > 메인 배너 (PC) LotteOn > 뷰티 > 럭셔리 스킨케어 > 에센스/세럼'</li></ul> |
| 9 | <ul><li>'더테라피 로얄메이드 오일블렌딩 크림 50ml/JL MinSellAmount (#M)화장품/향수>스킨케어>크림/젤 Gmarket > 뷰티 > 화장품/향수 > 스킨케어 > 크림/젤'</li><li>'닥터지 레드 블레미쉬 클리어 수딩크림 70ml × 3개 Coupang > 뷰티 > 선물세트/키트 > 선물세트 > 스킨케어;(#M)쿠팡 홈>뷰티>선물세트/키트>선물세트>스킨케어 Coupang > 뷰티 > 선물세트/키트 > 선물세트 > 스킨케어'</li><li>'유리아쥬 제모스 세라뜨 200ml /HY MinSellAmount (#M)화장품/향수>스킨케어>크림/젤 Gmarket > 뷰티 > 화장품/향수 > 스킨케어 > 크림/젤'</li></ul> |
| 6 | <ul><li>'워터뱅크 크림 아이젤 라네즈 아이케어 보습 (#M)홈>화장품/미용>스킨케어>로션 Naverstore > 화장품/미용 > 스킨케어 > 로션'</li><li>'엑스트라 아이 리페어 인텐스 1+1 LotteOn > 백화점 > 뷰티 > 상단 배너 (Mobile) LotteOn > 뷰티 > 럭셔리 스킨케어 > 아이케어/넥케어'</li><li>'골드마스크 구매시 설화수샘플 자음생아이크림 7장증정 (#M)위메프 > 뷰티 > 스킨케어 > 팩/마스크 > 마스크시트팩 위메프 > 뷰티 > 스킨케어 > 팩/마스크 > 마스크시트팩'</li></ul> |
| 1 | <ul><li>'[최신제조] 설화수 자음유액 125ml LotteOn > 뷰티 > 스킨케어 > 아이케어/넥케어 LotteOn > 뷰티 > 스킨케어 > 아이케어/넥케어'</li><li>'바비 브라운 인텐시브 스킨 세럼 레디언스 에멀전 바비 브라운 인텐시브 스킨 세럼 레디언스 에멀전 홈>스킨케어>스킨/로션/올인원>스킨/토너;(#M)홈>스킨케어>토너/로션/올인원>스킨/토너 OLIVEYOUNG > 스킨케어 > 토너/로션/올인원 > 스킨/토너'</li><li>'한율 빨간쌀 진액에멀젼 125ml/로션+보습+피부방어력+피부장벽 LotteOn > 뷰티 > 스킨케어 > 아이케어/넥케어 LotteOn > 뷰티 > 스킨케어 > 아이케어/넥케어'</li></ul> |
| 3 | <ul><li>'은율 모이스처 글로우 멀티밤 10g 12개 LotteOn > 뷰티 > 남성화장품 > 남성화장품세트 LotteOn > 뷰티 > 남성화장품 > 남성화장품세트'</li><li>'KAHI 가히 1+1 링클바운스 멀티밤 수분 주름 스틱 보습 주름케어 LotteOn > 뷰티 > 스킨케어 > 미스트 LotteOn > 뷰티 > 스킨케어 > 미스트'</li><li>'가히 서울 링클 바운스 멀티밤 9g LotteOn > 뷰티 > 스킨케어 > 크림 LotteOn > 뷰티 > 스킨케어 > 크림'</li></ul> |
| 7 | <ul><li>'NEW 모이스춰 써지 아이 96-아워 하이드로 컨센트레이트 15ml LotteOn > 뷰티 > 스킨케어 > 아이케어 LotteOn > 뷰티 > 럭셔리 스킨케어 > 아이케어/넥케어'</li><li>'SNP 골드 콜라겐 니들 패치 3박스 (24매) (#M)11st>스킨케어>팩/마스크>고무팩 11st > 뷰티 > 스킨케어 > 팩/마스크 > 고무팩'</li><li>'프럼네이처 골드 하이드로겔 아이패치 60매입 × 1개 쿠팡 홈>뷰티>스킨케어>마스크/팩>코팩/기타패치>아이 패치;쿠팡 홈>뷰티>스킨케어>마스크/팩>패치/코팩>기타패치;(#M)쿠팡 홈>뷰티>스킨케어>마스크/팩>패치/코팩>아이 패치 Coupang > 뷰티 > 스킨케어 > 마스크/팩 > 패치/코팩 > 아이 패치'</li></ul> |
| 10 | <ul><li>'블랙티 유스 인핸싱 오일 30mL LotteOn > 뷰티 > 스킨케어 > 오일 LotteOn > 뷰티 > 스킨케어 > 오일'</li><li>'[3월][한정] 아베이 로얄 유쓰 워터리 오일 50ml 세트 아베이 로얄 유쓰 워터리 오일 LotteOn > 뷰티 > 명품화장품 > 스킨케어 > 오일 LotteOn > 뷰티 > 명품화장품 > 스킨케어 > 오일'</li><li>'청미정 비타민나무 페이스오일 LotteOn > 뷰티 > 스킨케어 > 오일 LotteOn > 뷰티 > 스킨케어 > 오일'</li></ul> |
| 2 | <ul><li>'[본사직영] 글로우 스킨밤 투 고 미스트 80 ml 위메프 > 뷰티 > 메이크업 > 립 메이크업;위메프 > 뷰티 > 메이크업 > 립 메이크업 > 립글로즈;위메프 > 뷰티 > 스킨케어 > 앰플/에센스/세럼 > 에센스;위메프 > 뷰티 > 스킨케어 > 미스트;(#M)위메프 > 뷰티 > 스킨케어 > 미스트 > 미스트 위메프 > 뷰티 > 스킨케어 > 미스트'</li><li>'메이블린 뉴욕 래스팅 픽스 스프레이 60ml × 1개 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 베이스/프라이머 LotteOn > 뷰티 > 메이크업 > 베이스메이크업 > 베이스/프라이머'</li><li>'(아벤느 공식판매) 오떼르말 300ml(1+1)_AN08-2 A 화장품|미용>헤어케어|염색>샴푸린스>샴푸;(#M)홈>화장품/미용>헤어케어|염색>샴푸린스>샴푸 HMALL > 뷰티 > 화장품/미용 > 헤어케어 > 샴푸린스 > 샴푸'</li></ul> |
| 5 | <ul><li>'아이오페 여성화장품 라이브 리프트 스페셜 2종세트 MinSellAmount (#M)화장품/향수>스킨케어>페이스오일 Gmarket > 뷰티 > 화장품/향수 > 스킨케어 > 페이스오일'</li><li>'설화수 탄력 에센셜 3종기획세트 탄력크림 기초화장품 30대여자화장품 추천 LotteOn > 뷰티 > 스킨케어 > 화장품세트 LotteOn > 뷰티 > 스킨케어 > 화장품세트'</li><li>'그린티 스킨케어세트 이니스프리 그린티세트 밸런싱 (#M)위메프 > 생활·주방용품 > 바디/헤어 > 바디로션/핸드/풋 > 생활선물세트 위메프 > 뷰티 > 바디/헤어 > 바디로션/핸드/풋 > 생활선물세트'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9127 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_item_top_bt9")
# Run inference
preds = model("마몽드 로즈워터 토너 500ml × 1개 (#M)쿠팡 홈>뷰티>스킨케어>스킨 Coupang > 뷰티 > 스킨케어 > 스킨")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 10 | 20.6873 | 55 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
| 5 | 50 |
| 6 | 50 |
| 7 | 50 |
| 8 | 50 |
| 9 | 50 |
| 10 | 50 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 100
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-----:|:-------------:|:---------------:|
| 0.0012 | 1 | 0.3079 | - |
| 0.0581 | 50 | 0.3024 | - |
| 0.1163 | 100 | 0.288 | - |
| 0.1744 | 150 | 0.2621 | - |
| 0.2326 | 200 | 0.2186 | - |
| 0.2907 | 250 | 0.191 | - |
| 0.3488 | 300 | 0.1552 | - |
| 0.4070 | 350 | 0.1255 | - |
| 0.4651 | 400 | 0.1053 | - |
| 0.5233 | 450 | 0.0908 | - |
| 0.5814 | 500 | 0.0691 | - |
| 0.6395 | 550 | 0.0665 | - |
| 0.6977 | 600 | 0.053 | - |
| 0.7558 | 650 | 0.0438 | - |
| 0.8140 | 700 | 0.0407 | - |
| 0.8721 | 750 | 0.0325 | - |
| 0.9302 | 800 | 0.0277 | - |
| 0.9884 | 850 | 0.0232 | - |
| 1.0465 | 900 | 0.0197 | - |
| 1.1047 | 950 | 0.0171 | - |
| 1.1628 | 1000 | 0.0137 | - |
| 1.2209 | 1050 | 0.0113 | - |
| 1.2791 | 1100 | 0.0104 | - |
| 1.3372 | 1150 | 0.0109 | - |
| 1.3953 | 1200 | 0.0086 | - |
| 1.4535 | 1250 | 0.0075 | - |
| 1.5116 | 1300 | 0.0065 | - |
| 1.5698 | 1350 | 0.0075 | - |
| 1.6279 | 1400 | 0.0071 | - |
| 1.6860 | 1450 | 0.0072 | - |
| 1.7442 | 1500 | 0.0081 | - |
| 1.8023 | 1550 | 0.006 | - |
| 1.8605 | 1600 | 0.0062 | - |
| 1.9186 | 1650 | 0.0034 | - |
| 1.9767 | 1700 | 0.0019 | - |
| 2.0349 | 1750 | 0.0023 | - |
| 2.0930 | 1800 | 0.0019 | - |
| 2.1512 | 1850 | 0.0012 | - |
| 2.2093 | 1900 | 0.0009 | - |
| 2.2674 | 1950 | 0.0007 | - |
| 2.3256 | 2000 | 0.0001 | - |
| 2.3837 | 2050 | 0.0004 | - |
| 2.4419 | 2100 | 0.0008 | - |
| 2.5 | 2150 | 0.0011 | - |
| 2.5581 | 2200 | 0.0012 | - |
| 2.6163 | 2250 | 0.0009 | - |
| 2.6744 | 2300 | 0.0008 | - |
| 2.7326 | 2350 | 0.0006 | - |
| 2.7907 | 2400 | 0.0001 | - |
| 2.8488 | 2450 | 0.0002 | - |
| 2.9070 | 2500 | 0.0002 | - |
| 2.9651 | 2550 | 0.0004 | - |
| 3.0233 | 2600 | 0.0 | - |
| 3.0814 | 2650 | 0.0 | - |
| 3.1395 | 2700 | 0.0 | - |
| 3.1977 | 2750 | 0.0 | - |
| 3.2558 | 2800 | 0.0 | - |
| 3.3140 | 2850 | 0.0 | - |
| 3.3721 | 2900 | 0.0 | - |
| 3.4302 | 2950 | 0.0008 | - |
| 3.4884 | 3000 | 0.0008 | - |
| 3.5465 | 3050 | 0.0 | - |
| 3.6047 | 3100 | 0.0004 | - |
| 3.6628 | 3150 | 0.0026 | - |
| 3.7209 | 3200 | 0.0033 | - |
| 3.7791 | 3250 | 0.0013 | - |
| 3.8372 | 3300 | 0.0002 | - |
| 3.8953 | 3350 | 0.001 | - |
| 3.9535 | 3400 | 0.0006 | - |
| 4.0116 | 3450 | 0.0 | - |
| 4.0698 | 3500 | 0.0 | - |
| 4.1279 | 3550 | 0.0 | - |
| 4.1860 | 3600 | 0.0 | - |
| 4.2442 | 3650 | 0.0002 | - |
| 4.3023 | 3700 | 0.0002 | - |
| 4.3605 | 3750 | 0.0002 | - |
| 4.4186 | 3800 | 0.0 | - |
| 4.4767 | 3850 | 0.0 | - |
| 4.5349 | 3900 | 0.0 | - |
| 4.5930 | 3950 | 0.0 | - |
| 4.6512 | 4000 | 0.0 | - |
| 4.7093 | 4050 | 0.0 | - |
| 4.7674 | 4100 | 0.0 | - |
| 4.8256 | 4150 | 0.0 | - |
| 4.8837 | 4200 | 0.0 | - |
| 4.9419 | 4250 | 0.0 | - |
| 5.0 | 4300 | 0.0 | - |
| 5.0581 | 4350 | 0.0 | - |
| 5.1163 | 4400 | 0.0 | - |
| 5.1744 | 4450 | 0.0 | - |
| 5.2326 | 4500 | 0.0 | - |
| 5.2907 | 4550 | 0.0 | - |
| 5.3488 | 4600 | 0.0 | - |
| 5.4070 | 4650 | 0.0 | - |
| 5.4651 | 4700 | 0.0 | - |
| 5.5233 | 4750 | 0.0 | - |
| 5.5814 | 4800 | 0.0 | - |
| 5.6395 | 4850 | 0.0 | - |
| 5.6977 | 4900 | 0.0 | - |
| 5.7558 | 4950 | 0.0 | - |
| 5.8140 | 5000 | 0.0 | - |
| 5.8721 | 5050 | 0.0 | - |
| 5.9302 | 5100 | 0.0 | - |
| 5.9884 | 5150 | 0.0 | - |
| 6.0465 | 5200 | 0.0 | - |
| 6.1047 | 5250 | 0.0 | - |
| 6.1628 | 5300 | 0.0 | - |
| 6.2209 | 5350 | 0.0 | - |
| 6.2791 | 5400 | 0.0 | - |
| 6.3372 | 5450 | 0.0 | - |
| 6.3953 | 5500 | 0.0 | - |
| 6.4535 | 5550 | 0.0 | - |
| 6.5116 | 5600 | 0.0 | - |
| 6.5698 | 5650 | 0.0 | - |
| 6.6279 | 5700 | 0.0 | - |
| 6.6860 | 5750 | 0.0 | - |
| 6.7442 | 5800 | 0.0 | - |
| 6.8023 | 5850 | 0.0 | - |
| 6.8605 | 5900 | 0.0 | - |
| 6.9186 | 5950 | 0.0 | - |
| 6.9767 | 6000 | 0.0 | - |
| 7.0349 | 6050 | 0.0 | - |
| 7.0930 | 6100 | 0.0 | - |
| 7.1512 | 6150 | 0.0 | - |
| 7.2093 | 6200 | 0.0 | - |
| 7.2674 | 6250 | 0.0 | - |
| 7.3256 | 6300 | 0.0 | - |
| 7.3837 | 6350 | 0.0 | - |
| 7.4419 | 6400 | 0.0 | - |
| 7.5 | 6450 | 0.0 | - |
| 7.5581 | 6500 | 0.0 | - |
| 7.6163 | 6550 | 0.0 | - |
| 7.6744 | 6600 | 0.0 | - |
| 7.7326 | 6650 | 0.0 | - |
| 7.7907 | 6700 | 0.0 | - |
| 7.8488 | 6750 | 0.0 | - |
| 7.9070 | 6800 | 0.0 | - |
| 7.9651 | 6850 | 0.0 | - |
| 8.0233 | 6900 | 0.0 | - |
| 8.0814 | 6950 | 0.0005 | - |
| 8.1395 | 7000 | 0.0089 | - |
| 8.1977 | 7050 | 0.0043 | - |
| 8.2558 | 7100 | 0.0012 | - |
| 8.3140 | 7150 | 0.0003 | - |
| 8.3721 | 7200 | 0.0003 | - |
| 8.4302 | 7250 | 0.0003 | - |
| 8.4884 | 7300 | 0.0001 | - |
| 8.5465 | 7350 | 0.0 | - |
| 8.6047 | 7400 | 0.0 | - |
| 8.6628 | 7450 | 0.0 | - |
| 8.7209 | 7500 | 0.0 | - |
| 8.7791 | 7550 | 0.0 | - |
| 8.8372 | 7600 | 0.0 | - |
| 8.8953 | 7650 | 0.0 | - |
| 8.9535 | 7700 | 0.0 | - |
| 9.0116 | 7750 | 0.0 | - |
| 9.0698 | 7800 | 0.0 | - |
| 9.1279 | 7850 | 0.0 | - |
| 9.1860 | 7900 | 0.0 | - |
| 9.2442 | 7950 | 0.0 | - |
| 9.3023 | 8000 | 0.0 | - |
| 9.3605 | 8050 | 0.0 | - |
| 9.4186 | 8100 | 0.0 | - |
| 9.4767 | 8150 | 0.0 | - |
| 9.5349 | 8200 | 0.0 | - |
| 9.5930 | 8250 | 0.0 | - |
| 9.6512 | 8300 | 0.0 | - |
| 9.7093 | 8350 | 0.0 | - |
| 9.7674 | 8400 | 0.0 | - |
| 9.8256 | 8450 | 0.0 | - |
| 9.8837 | 8500 | 0.0 | - |
| 9.9419 | 8550 | 0.0 | - |
| 10.0 | 8600 | 0.0 | - |
| 10.0581 | 8650 | 0.0 | - |
| 10.1163 | 8700 | 0.0 | - |
| 10.1744 | 8750 | 0.0 | - |
| 10.2326 | 8800 | 0.0 | - |
| 10.2907 | 8850 | 0.0 | - |
| 10.3488 | 8900 | 0.0 | - |
| 10.4070 | 8950 | 0.0 | - |
| 10.4651 | 9000 | 0.0 | - |
| 10.5233 | 9050 | 0.0 | - |
| 10.5814 | 9100 | 0.0 | - |
| 10.6395 | 9150 | 0.0 | - |
| 10.6977 | 9200 | 0.0 | - |
| 10.7558 | 9250 | 0.0 | - |
| 10.8140 | 9300 | 0.0 | - |
| 10.8721 | 9350 | 0.0 | - |
| 10.9302 | 9400 | 0.0 | - |
| 10.9884 | 9450 | 0.0 | - |
| 11.0465 | 9500 | 0.0 | - |
| 11.1047 | 9550 | 0.0 | - |
| 11.1628 | 9600 | 0.0 | - |
| 11.2209 | 9650 | 0.0 | - |
| 11.2791 | 9700 | 0.0 | - |
| 11.3372 | 9750 | 0.0 | - |
| 11.3953 | 9800 | 0.0 | - |
| 11.4535 | 9850 | 0.0 | - |
| 11.5116 | 9900 | 0.0 | - |
| 11.5698 | 9950 | 0.0 | - |
| 11.6279 | 10000 | 0.0 | - |
| 11.6860 | 10050 | 0.0 | - |
| 11.7442 | 10100 | 0.0 | - |
| 11.8023 | 10150 | 0.0 | - |
| 11.8605 | 10200 | 0.0 | - |
| 11.9186 | 10250 | 0.0 | - |
| 11.9767 | 10300 | 0.0 | - |
| 12.0349 | 10350 | 0.0 | - |
| 12.0930 | 10400 | 0.0 | - |
| 12.1512 | 10450 | 0.0 | - |
| 12.2093 | 10500 | 0.0 | - |
| 12.2674 | 10550 | 0.0 | - |
| 12.3256 | 10600 | 0.0 | - |
| 12.3837 | 10650 | 0.0 | - |
| 12.4419 | 10700 | 0.0 | - |
| 12.5 | 10750 | 0.0 | - |
| 12.5581 | 10800 | 0.0 | - |
| 12.6163 | 10850 | 0.0 | - |
| 12.6744 | 10900 | 0.0 | - |
| 12.7326 | 10950 | 0.0 | - |
| 12.7907 | 11000 | 0.0 | - |
| 12.8488 | 11050 | 0.0 | - |
| 12.9070 | 11100 | 0.0 | - |
| 12.9651 | 11150 | 0.0 | - |
| 13.0233 | 11200 | 0.0 | - |
| 13.0814 | 11250 | 0.0 | - |
| 13.1395 | 11300 | 0.0 | - |
| 13.1977 | 11350 | 0.0 | - |
| 13.2558 | 11400 | 0.0 | - |
| 13.3140 | 11450 | 0.0 | - |
| 13.3721 | 11500 | 0.0 | - |
| 13.4302 | 11550 | 0.0 | - |
| 13.4884 | 11600 | 0.0 | - |
| 13.5465 | 11650 | 0.0 | - |
| 13.6047 | 11700 | 0.0 | - |
| 13.6628 | 11750 | 0.0 | - |
| 13.7209 | 11800 | 0.0 | - |
| 13.7791 | 11850 | 0.0 | - |
| 13.8372 | 11900 | 0.0 | - |
| 13.8953 | 11950 | 0.0 | - |
| 13.9535 | 12000 | 0.0 | - |
| 14.0116 | 12050 | 0.0 | - |
| 14.0698 | 12100 | 0.0 | - |
| 14.1279 | 12150 | 0.0 | - |
| 14.1860 | 12200 | 0.0 | - |
| 14.2442 | 12250 | 0.0 | - |
| 14.3023 | 12300 | 0.0 | - |
| 14.3605 | 12350 | 0.0 | - |
| 14.4186 | 12400 | 0.0 | - |
| 14.4767 | 12450 | 0.0 | - |
| 14.5349 | 12500 | 0.0 | - |
| 14.5930 | 12550 | 0.0 | - |
| 14.6512 | 12600 | 0.0 | - |
| 14.7093 | 12650 | 0.0 | - |
| 14.7674 | 12700 | 0.0 | - |
| 14.8256 | 12750 | 0.0 | - |
| 14.8837 | 12800 | 0.0 | - |
| 14.9419 | 12850 | 0.0 | - |
| 15.0 | 12900 | 0.0 | - |
| 15.0581 | 12950 | 0.0 | - |
| 15.1163 | 13000 | 0.0 | - |
| 15.1744 | 13050 | 0.0 | - |
| 15.2326 | 13100 | 0.0 | - |
| 15.2907 | 13150 | 0.0 | - |
| 15.3488 | 13200 | 0.0 | - |
| 15.4070 | 13250 | 0.0 | - |
| 15.4651 | 13300 | 0.0 | - |
| 15.5233 | 13350 | 0.0 | - |
| 15.5814 | 13400 | 0.0 | - |
| 15.6395 | 13450 | 0.0 | - |
| 15.6977 | 13500 | 0.0 | - |
| 15.7558 | 13550 | 0.0 | - |
| 15.8140 | 13600 | 0.0 | - |
| 15.8721 | 13650 | 0.0 | - |
| 15.9302 | 13700 | 0.0 | - |
| 15.9884 | 13750 | 0.0 | - |
| 16.0465 | 13800 | 0.0 | - |
| 16.1047 | 13850 | 0.0 | - |
| 16.1628 | 13900 | 0.0 | - |
| 16.2209 | 13950 | 0.0 | - |
| 16.2791 | 14000 | 0.0 | - |
| 16.3372 | 14050 | 0.0 | - |
| 16.3953 | 14100 | 0.0 | - |
| 16.4535 | 14150 | 0.0 | - |
| 16.5116 | 14200 | 0.0 | - |
| 16.5698 | 14250 | 0.0 | - |
| 16.6279 | 14300 | 0.0 | - |
| 16.6860 | 14350 | 0.0 | - |
| 16.7442 | 14400 | 0.0 | - |
| 16.8023 | 14450 | 0.0 | - |
| 16.8605 | 14500 | 0.0 | - |
| 16.9186 | 14550 | 0.0 | - |
| 16.9767 | 14600 | 0.0 | - |
| 17.0349 | 14650 | 0.0 | - |
| 17.0930 | 14700 | 0.0 | - |
| 17.1512 | 14750 | 0.0 | - |
| 17.2093 | 14800 | 0.0 | - |
| 17.2674 | 14850 | 0.0 | - |
| 17.3256 | 14900 | 0.0 | - |
| 17.3837 | 14950 | 0.0 | - |
| 17.4419 | 15000 | 0.0 | - |
| 17.5 | 15050 | 0.0 | - |
| 17.5581 | 15100 | 0.0 | - |
| 17.6163 | 15150 | 0.0 | - |
| 17.6744 | 15200 | 0.0 | - |
| 17.7326 | 15250 | 0.0 | - |
| 17.7907 | 15300 | 0.0 | - |
| 17.8488 | 15350 | 0.0 | - |
| 17.9070 | 15400 | 0.0 | - |
| 17.9651 | 15450 | 0.0 | - |
| 18.0233 | 15500 | 0.0 | - |
| 18.0814 | 15550 | 0.0 | - |
| 18.1395 | 15600 | 0.0 | - |
| 18.1977 | 15650 | 0.0 | - |
| 18.2558 | 15700 | 0.0 | - |
| 18.3140 | 15750 | 0.0 | - |
| 18.3721 | 15800 | 0.0 | - |
| 18.4302 | 15850 | 0.0 | - |
| 18.4884 | 15900 | 0.0 | - |
| 18.5465 | 15950 | 0.0 | - |
| 18.6047 | 16000 | 0.0 | - |
| 18.6628 | 16050 | 0.0 | - |
| 18.7209 | 16100 | 0.0 | - |
| 18.7791 | 16150 | 0.0 | - |
| 18.8372 | 16200 | 0.0 | - |
| 18.8953 | 16250 | 0.0 | - |
| 18.9535 | 16300 | 0.0 | - |
| 19.0116 | 16350 | 0.0 | - |
| 19.0698 | 16400 | 0.0 | - |
| 19.1279 | 16450 | 0.0 | - |
| 19.1860 | 16500 | 0.0 | - |
| 19.2442 | 16550 | 0.0 | - |
| 19.3023 | 16600 | 0.0 | - |
| 19.3605 | 16650 | 0.0 | - |
| 19.4186 | 16700 | 0.0 | - |
| 19.4767 | 16750 | 0.0 | - |
| 19.5349 | 16800 | 0.0 | - |
| 19.5930 | 16850 | 0.0 | - |
| 19.6512 | 16900 | 0.0 | - |
| 19.7093 | 16950 | 0.0 | - |
| 19.7674 | 17000 | 0.0 | - |
| 19.8256 | 17050 | 0.0 | - |
| 19.8837 | 17100 | 0.0 | - |
| 19.9419 | 17150 | 0.0 | - |
| 20.0 | 17200 | 0.0 | - |
| 20.0581 | 17250 | 0.0 | - |
| 20.1163 | 17300 | 0.0 | - |
| 20.1744 | 17350 | 0.0 | - |
| 20.2326 | 17400 | 0.0 | - |
| 20.2907 | 17450 | 0.0 | - |
| 20.3488 | 17500 | 0.0 | - |
| 20.4070 | 17550 | 0.0 | - |
| 20.4651 | 17600 | 0.0 | - |
| 20.5233 | 17650 | 0.0 | - |
| 20.5814 | 17700 | 0.0 | - |
| 20.6395 | 17750 | 0.0 | - |
| 20.6977 | 17800 | 0.0 | - |
| 20.7558 | 17850 | 0.0 | - |
| 20.8140 | 17900 | 0.0 | - |
| 20.8721 | 17950 | 0.0 | - |
| 20.9302 | 18000 | 0.0 | - |
| 20.9884 | 18050 | 0.0 | - |
| 21.0465 | 18100 | 0.0 | - |
| 21.1047 | 18150 | 0.0 | - |
| 21.1628 | 18200 | 0.0 | - |
| 21.2209 | 18250 | 0.0 | - |
| 21.2791 | 18300 | 0.0 | - |
| 21.3372 | 18350 | 0.0 | - |
| 21.3953 | 18400 | 0.0 | - |
| 21.4535 | 18450 | 0.0 | - |
| 21.5116 | 18500 | 0.0 | - |
| 21.5698 | 18550 | 0.0 | - |
| 21.6279 | 18600 | 0.0 | - |
| 21.6860 | 18650 | 0.0 | - |
| 21.7442 | 18700 | 0.0 | - |
| 21.8023 | 18750 | 0.0 | - |
| 21.8605 | 18800 | 0.0 | - |
| 21.9186 | 18850 | 0.0 | - |
| 21.9767 | 18900 | 0.0 | - |
| 22.0349 | 18950 | 0.0 | - |
| 22.0930 | 19000 | 0.0 | - |
| 22.1512 | 19050 | 0.0 | - |
| 22.2093 | 19100 | 0.0 | - |
| 22.2674 | 19150 | 0.0 | - |
| 22.3256 | 19200 | 0.0 | - |
| 22.3837 | 19250 | 0.0 | - |
| 22.4419 | 19300 | 0.0 | - |
| 22.5 | 19350 | 0.0 | - |
| 22.5581 | 19400 | 0.0 | - |
| 22.6163 | 19450 | 0.0 | - |
| 22.6744 | 19500 | 0.0 | - |
| 22.7326 | 19550 | 0.0 | - |
| 22.7907 | 19600 | 0.0 | - |
| 22.8488 | 19650 | 0.0 | - |
| 22.9070 | 19700 | 0.0 | - |
| 22.9651 | 19750 | 0.0 | - |
| 23.0233 | 19800 | 0.0 | - |
| 23.0814 | 19850 | 0.0 | - |
| 23.1395 | 19900 | 0.0 | - |
| 23.1977 | 19950 | 0.0 | - |
| 23.2558 | 20000 | 0.0 | - |
| 23.3140 | 20050 | 0.0 | - |
| 23.3721 | 20100 | 0.0 | - |
| 23.4302 | 20150 | 0.0 | - |
| 23.4884 | 20200 | 0.0 | - |
| 23.5465 | 20250 | 0.0 | - |
| 23.6047 | 20300 | 0.0 | - |
| 23.6628 | 20350 | 0.0 | - |
| 23.7209 | 20400 | 0.0 | - |
| 23.7791 | 20450 | 0.0 | - |
| 23.8372 | 20500 | 0.0 | - |
| 23.8953 | 20550 | 0.0 | - |
| 23.9535 | 20600 | 0.0 | - |
| 24.0116 | 20650 | 0.0 | - |
| 24.0698 | 20700 | 0.0 | - |
| 24.1279 | 20750 | 0.0 | - |
| 24.1860 | 20800 | 0.0 | - |
| 24.2442 | 20850 | 0.0 | - |
| 24.3023 | 20900 | 0.0 | - |
| 24.3605 | 20950 | 0.0 | - |
| 24.4186 | 21000 | 0.0 | - |
| 24.4767 | 21050 | 0.0 | - |
| 24.5349 | 21100 | 0.0 | - |
| 24.5930 | 21150 | 0.0 | - |
| 24.6512 | 21200 | 0.0003 | - |
| 24.7093 | 21250 | 0.0001 | - |
| 24.7674 | 21300 | 0.0001 | - |
| 24.8256 | 21350 | 0.0 | - |
| 24.8837 | 21400 | 0.0 | - |
| 24.9419 | 21450 | 0.0 | - |
| 25.0 | 21500 | 0.0 | - |
| 25.0581 | 21550 | 0.0 | - |
| 25.1163 | 21600 | 0.0 | - |
| 25.1744 | 21650 | 0.0 | - |
| 25.2326 | 21700 | 0.0 | - |
| 25.2907 | 21750 | 0.0 | - |
| 25.3488 | 21800 | 0.0 | - |
| 25.4070 | 21850 | 0.0 | - |
| 25.4651 | 21900 | 0.0 | - |
| 25.5233 | 21950 | 0.0 | - |
| 25.5814 | 22000 | 0.0 | - |
| 25.6395 | 22050 | 0.0 | - |
| 25.6977 | 22100 | 0.0 | - |
| 25.7558 | 22150 | 0.0 | - |
| 25.8140 | 22200 | 0.0 | - |
| 25.8721 | 22250 | 0.0 | - |
| 25.9302 | 22300 | 0.0 | - |
| 25.9884 | 22350 | 0.0 | - |
| 26.0465 | 22400 | 0.0 | - |
| 26.1047 | 22450 | 0.0 | - |
| 26.1628 | 22500 | 0.0 | - |
| 26.2209 | 22550 | 0.0 | - |
| 26.2791 | 22600 | 0.0 | - |
| 26.3372 | 22650 | 0.0 | - |
| 26.3953 | 22700 | 0.0 | - |
| 26.4535 | 22750 | 0.0 | - |
| 26.5116 | 22800 | 0.0 | - |
| 26.5698 | 22850 | 0.0 | - |
| 26.6279 | 22900 | 0.0 | - |
| 26.6860 | 22950 | 0.0 | - |
| 26.7442 | 23000 | 0.0 | - |
| 26.8023 | 23050 | 0.0 | - |
| 26.8605 | 23100 | 0.0 | - |
| 26.9186 | 23150 | 0.0 | - |
| 26.9767 | 23200 | 0.0 | - |
| 27.0349 | 23250 | 0.0 | - |
| 27.0930 | 23300 | 0.0 | - |
| 27.1512 | 23350 | 0.0 | - |
| 27.2093 | 23400 | 0.0 | - |
| 27.2674 | 23450 | 0.0 | - |
| 27.3256 | 23500 | 0.0 | - |
| 27.3837 | 23550 | 0.0 | - |
| 27.4419 | 23600 | 0.0 | - |
| 27.5 | 23650 | 0.0 | - |
| 27.5581 | 23700 | 0.0 | - |
| 27.6163 | 23750 | 0.0 | - |
| 27.6744 | 23800 | 0.0 | - |
| 27.7326 | 23850 | 0.0 | - |
| 27.7907 | 23900 | 0.0 | - |
| 27.8488 | 23950 | 0.0 | - |
| 27.9070 | 24000 | 0.0 | - |
| 27.9651 | 24050 | 0.0 | - |
| 28.0233 | 24100 | 0.0 | - |
| 28.0814 | 24150 | 0.0 | - |
| 28.1395 | 24200 | 0.0 | - |
| 28.1977 | 24250 | 0.0 | - |
| 28.2558 | 24300 | 0.0 | - |
| 28.3140 | 24350 | 0.0 | - |
| 28.3721 | 24400 | 0.0 | - |
| 28.4302 | 24450 | 0.0 | - |
| 28.4884 | 24500 | 0.0 | - |
| 28.5465 | 24550 | 0.0 | - |
| 28.6047 | 24600 | 0.0 | - |
| 28.6628 | 24650 | 0.0 | - |
| 28.7209 | 24700 | 0.0 | - |
| 28.7791 | 24750 | 0.0 | - |
| 28.8372 | 24800 | 0.0 | - |
| 28.8953 | 24850 | 0.0 | - |
| 28.9535 | 24900 | 0.0 | - |
| 29.0116 | 24950 | 0.0 | - |
| 29.0698 | 25000 | 0.0 | - |
| 29.1279 | 25050 | 0.0 | - |
| 29.1860 | 25100 | 0.0 | - |
| 29.2442 | 25150 | 0.0 | - |
| 29.3023 | 25200 | 0.0 | - |
| 29.3605 | 25250 | 0.0 | - |
| 29.4186 | 25300 | 0.0 | - |
| 29.4767 | 25350 | 0.0 | - |
| 29.5349 | 25400 | 0.0 | - |
| 29.5930 | 25450 | 0.0 | - |
| 29.6512 | 25500 | 0.0 | - |
| 29.7093 | 25550 | 0.0 | - |
| 29.7674 | 25600 | 0.0 | - |
| 29.8256 | 25650 | 0.0 | - |
| 29.8837 | 25700 | 0.0 | - |
| 29.9419 | 25750 | 0.0 | - |
| 30.0 | 25800 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mipat12/dore-phase1-4e-4-ss3.0-crops | mipat12 | 2025-01-09T04:29:51Z | 134 | 0 | diffusers | [
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"lycoris",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-08T03:24:10Z | ---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'a hipster man with a beard, building a chair in the style of a d0r3 engraving.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
- text: 'in the style of a d0r3 engraving, Three elderly women huddle together, their robes intertwined as they share a scroll between them. Their faces show deep concentration, with pronounced wrinkles and hollow cheeks.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_2_0.png
- text: 'a hamster in the style of a d0r3 engraving.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_3_0.png
- text: 'in the style of a d0r3 engraving, A young girl stands on tiptoes reaching upward, her hair falling in loose waves. A ribbon streams behind her, caught in an invisible wind. The base beneath her feet shows carved clouds, suggesting she floats between earth and sky.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_4_0.png
- text: 'a man holding a sign that says, ''this is a sign'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_5_0.png
- text: 'a pig, in a post apocalyptic world, with a shotgun, in a leather jacket, in a desert, with a motorcycle'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_6_0.png
- text: 'woman holding a sign that says ''I LOVE PROMPTS!'' in the style of a d0r3 engraving'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_7_0.png
- text: 'two men in robes with laurel wreaths in a haunted forest with gnarled branches in the style of a d0r3 engraving.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_8_0.png
- text: 'A person kneels on the ground with a staff. Three figures with wings stand elevated on the left side. Background shows a cloudy sky and hilly terrain in the style of a d0r3 engraving.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_9_0.png
---
# dore-phase1-4e-4-ss3.0-crops
This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
No validation prompt was used during training.
## Validation settings
- CFG: `4.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `768x1024`
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 2
- Training steps: 5000
- Learning rate: 0.0004
- Learning rate schedule: polynomial
- Warmup steps: 100
- Max grad norm: 0.1
- Effective batch size: 3
- Micro-batch size: 3
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3.0', 'flux_guidance_mode=constant', 'flux_guidance_value=4.0', 'flow_matching_loss=compatible'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 10.0%
### LyCORIS Config:
```json
{
"algo": "lokr",
"multiplier": 1.0,
"linear_dim": 10000,
"linear_alpha": 1,
"factor": 16,
"apply_preset": {
"target_module": [
"Attention",
"FeedForward"
],
"module_algo_map": {
"Attention": {
"factor": 16
},
"FeedForward": {
"factor": 8
}
}
}
}
```
## Datasets
### dore-background-512
- Repeats: 22
- Total number of images: 48
- Total number of aspect buckets: 5
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### dore-background-768
- Repeats: 22
- Total number of images: 48
- Total number of aspect buckets: 7
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### dore-background-1024
- Repeats: 11
- Total number of images: 48
- Total number of aspect buckets: 2
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### dore-background-1536
- Repeats: 5
- Total number of images: 46
- Total number of aspect buckets: 10
- Resolution: 2.359296 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### dore-background-512-crop
- Repeats: 11
- Total number of images: 48
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### dore-background-768-crop
- Repeats: 11
- Total number of images: 47
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### dore-background-512-tight-crop
- Repeats: 11
- Total number of images: 48
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### dore-background-768-tight-crop
- Repeats: 11
- Total number of images: 47
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### dore-background-1024-crop
- Repeats: 5
- Total number of images: 47
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights
def download_adapter(repo_id: str):
import os
from huggingface_hub import hf_hub_download
adapter_filename = "pytorch_lora_weights.safetensors"
cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
os.makedirs(path_to_adapter, exist_ok=True)
hf_hub_download(
repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
)
return path_to_adapter_file
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_repo_id = 'mipat12/dore-phase1-4e-4-ss3.0-crops'
adapter_filename = 'pytorch_lora_weights.safetensors'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()
prompt = "An astronaut is riding a horse through the jungles of Thailand."
## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
image = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=768,
height=1024,
guidance_scale=4.0,
).images[0]
image.save("output.png", format="PNG")
```
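Note that the `lora_scale` passed to `create_lycoris_from_weights` acts as the LyCORIS multiplier: values below `1.0` should apply the adapter more weakly relative to the base weights (this is our reading of the LyCORIS API, not something documented for this checkpoint).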
|
FatCat87/31813020-2e0e-42ee-b232-f5fa4e28a89f | FatCat87 | 2025-01-09T04:24:22Z | 13 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T04:18:58Z | ---
license: mit
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: unsloth/Phi-3-mini-4k-instruct
model-index:
- name: 31813020-2e0e-42ee-b232-f5fa4e28a89f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: auto
datasets:
- data_files:
- c5ccc15ac08967d9_train_data.json
ds_type: json
format: custom
path: c5ccc15ac08967d9_train_data.json
type:
field: null
field_input: null
field_instruction: instruction
field_output: positive_sample
field_system: null
format: null
no_input_format: null
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_sample_packing: false
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: FatCat87/31813020-2e0e-42ee-b232-f5fa4e28a89f
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: ./outputs/out
pad_to_sequence_len: true
resume_from_checkpoint: null
sample_packing: true
saves_per_epoch: 1
seed: 701
sequence_len: 4096
special_tokens: null
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0.1
wandb_entity: fatcat87-taopanda
wandb_log_model: null
wandb_mode: online
wandb_name: 31813020-2e0e-42ee-b232-f5fa4e28a89f
wandb_project: subnet56
wandb_runid: 31813020-2e0e-42ee-b232-f5fa4e28a89f
wandb_watch: null
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/fatcat87-taopanda/subnet56/runs/7w4rbqlp)
# 31813020-2e0e-42ee-b232-f5fa4e28a89f
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2099
## Model description
More information needed
## Intended uses & limitations
More information needed
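
In lieu of documented usage, below is a minimal, untested sketch for attaching this LoRA adapter to its base model with PEFT. The prompt text is illustrative, and the model is loaded here in bf16 rather than the 8-bit setting used during training:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3-mini-4k-instruct"
adapter_id = "FatCat87/31813020-2e0e-42ee-b232-f5fa4e28a89f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Summarize: ...", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```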
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 701
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1696 | 0.2222 | 1 | 2.2843 |
| 2.1709 | 0.4444 | 2 | 2.2461 |
| 2.2248 | 0.6667 | 3 | 2.2188 |
| 2.1473 | 0.8889 | 4 | 2.2099 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
philip-hightech/fc9efd11-f9c4-419a-bdac-16dcb3cb66f3 | philip-hightech | 2025-01-09T04:21:55Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
]
| null | 2025-01-09T04:12:52Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fc9efd11-f9c4-419a-bdac-16dcb3cb66f3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c699451d3dc0028_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c699451d3dc0028_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/fc9efd11-f9c4-419a-bdac-16dcb3cb66f3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5c699451d3dc0028_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4fd7ad13-9b31-4f42-9994-6d2cc4618ed6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4fd7ad13-9b31-4f42-9994-6d2cc4618ed6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fc9efd11-f9c4-419a-bdac-16dcb3cb66f3
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9671 | 0.0001 | 1 | 3.0385 |
| 2.8004 | 0.0003 | 3 | 3.0328 |
| 2.7988 | 0.0006 | 6 | 2.9445 |
| 2.6493 | 0.0010 | 9 | 2.7230 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dimasik1987/4dd90c4f-1dab-40c5-9832-2e4386dc1207 | dimasik1987 | 2025-01-09T04:20:43Z | 9 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
]
| null | 2025-01-09T04:18:33Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4dd90c4f-1dab-40c5-9832-2e4386dc1207
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c5ccc15ac08967d9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c5ccc15ac08967d9_train_data.json
type:
field_instruction: instruction
field_output: positive_sample
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik1987/4dd90c4f-1dab-40c5-9832-2e4386dc1207
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/c5ccc15ac08967d9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31813020-2e0e-42ee-b232-f5fa4e28a89f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31813020-2e0e-42ee-b232-f5fa4e28a89f
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4dd90c4f-1dab-40c5-9832-2e4386dc1207
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0029 | 1 | nan |
| 0.0 | 0.0231 | 8 | nan |
| 0.0 | 0.0461 | 16 | nan |
| 0.0 | 0.0692 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MawaredHR/Mawared_T1 | MawaredHR | 2025-01-09T04:19:38Z | 2,856 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"ar",
"en",
"base_model:arcee-ai/Meraj-Mini",
"base_model:finetune:arcee-ai/Meraj-Mini",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-02T04:08:36Z | ---
base_model: arcee-ai/Meraj-Mini
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- ar
- en
model-index:
- name: MawaredT1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 41.99
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 31.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 14.58
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.3
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.68
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.31
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
name: Open LLM Leaderboard
---

# Bilingual Assistant Model Card
## Overview
This bilingual language model is designed to support seamless text generation and understanding in both Arabic (ar) and English (en). Fine-tuned from the `arcee-ai/Meraj-Mini` base model, it offers robust multilingual capabilities optimized for various applications such as conversational agents, content creation, and multilingual text analysis.
### Key Highlights
- **Multilingual Proficiency:** Designed to handle complex linguistic nuances in both Arabic and English, ensuring high-quality outputs in both languages.
- **Performance Optimization:** Achieved 2x faster training through innovative methods provided by the [Unsloth](https://github.com/unslothai/unsloth) framework and the Hugging Face TRL library.
- **Transformer-Based Architecture:** Utilizes advanced transformer layers to deliver state-of-the-art performance in text generation and inference.
## Development Details
- **Developer:** Daemontatox
- **License:** Licensed under the Apache-2.0, ensuring open accessibility and flexibility for various use cases.
- **Base Model:** The model is a fine-tuned variant of `arcee-ai/Meraj-Mini`.
- **Frameworks Used:**
- [Unsloth](https://github.com/unslothai/unsloth): Enabled faster and more efficient training.
- Hugging Face TRL Library: Provided tools for reinforcement learning fine-tuning, enhancing model responsiveness and accuracy.
## Training Process
The fine-tuning process was conducted with a focus on:
- **Data Diversity:** Leveraged a bilingual corpus to ensure comprehensive language understanding across both supported languages.
- **Optimized Hardware Utilization:** Implemented Unsloth's accelerated training methods, significantly reducing resource consumption and training time.
- **Reinforcement Learning:** Used Hugging Face's TRL library to fine-tune the model's decision-making and response generation capabilities, particularly for conversational and contextual understanding.
## Applications
This model is suited for a variety of real-world applications, including:
1. **Conversational Agents:** Powering bilingual chatbots and virtual assistants for customer support and personal use.
2. **Content Generation:** Assisting in drafting multilingual articles, social media posts, and creative writing.
3. **Translation Support:** Providing context-aware translations and summaries across Arabic and English.
4. **Education:** Enhancing learning platforms by offering bilingual educational content and interactive learning experiences.
## Future Directions
Plans for extending the model's capabilities include:
- **Additional Language Support:** Exploring fine-tuning for additional languages.
- **Domain-Specific Training:** Specializing the model for industries such as healthcare, legal, and technical writing.
- **Optimization for Edge Devices:** Investigating quantization techniques to deploy the model on resource-constrained hardware like mobile devices and IoT platforms.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__MawaredT1-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Daemontatox%2FMawaredT1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 26.63|
|IFEval (0-Shot) | 41.99|
|BBH (3-Shot) | 31.90|
|MATH Lvl 5 (4-Shot)| 14.58|
|GPQA (0-shot) | 11.30|
|MuSR (0-shot) | 18.68|
|MMLU-PRO (5-shot) | 41.31|
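
The card does not include a usage snippet; a minimal text-generation sketch with 🤗 Transformers follows. The chat template and sampling settings are assumptions (the base is Qwen2-derived), and the Arabic prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MawaredHR/Mawared_T1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Bilingual prompt; the model supports both Arabic and English.
messages = [{"role": "user", "content": "لخص لي أهم بنود عقد العمل التالي: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```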
|
pedrojm/modelv2_clasificacioncomentario | pedrojm | 2025-01-09T04:19:25Z | 55 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"region:us"
]
| text-classification | 2025-01-09T04:17:04Z | ---
pipeline_tag: text-classification
---
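The card is otherwise empty; a minimal classification sketch with the 🤗 `pipeline` follows. The label set is not documented, and the Spanish example input is illustrative:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="pedrojm/modelv2_clasificacioncomentario",
)
print(clf("Excelente servicio, muy recomendado."))  # returns [{'label': ..., 'score': ...}]
```
 |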
helene-rousset/inkman_t5_large_original_label | helene-rousset | 2025-01-09T04:18:39Z | 99 | 0 | null | [
"safetensors",
"t5",
"region:us"
]
| null | 2025-01-08T21:08:16Z | ---
{}
---
# Model Card
For this variant, the original labels (rather than the extended labels) were added to the prompt.
## Model Details
- Base Model: google-t5/t5-large
- Dataset: cloudwalk-kickass/inkman_label_extention_gpt4o_mini
- Dataset size: 18000
- Revision: 96d1f08
## Training
This model was fine-tuned using the following hyperparameters:
- Learning rate: 1e-05
- Batch size: 16
- Max epochs: 5
- Weight decay: 0.01
## Prompt
- System prompt:
- Prefix Input: You are tasked with analyzing structured credit reports and predicting whether a merchant is likely to repay a future loan. Each report contains detailed features about the merchant's financial performance, platform engagement, risk indicators, and customer interactions. Your goal is to evaluate the provided information and output a binary prediction: 'True' if the merchant is likely to repay the loan, or 'False' if the merchant is unlikely to repay.
- Prefix Output: Based on the above information, the binary repayment outcome is:
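
Putting the pieces together, a minimal inference sketch follows. Only the two prefixes above are documented; the report string and the way prefix, report, and suffix are concatenated are assumptions:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "helene-rousset/inkman_t5_large_original_label"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

prefix_input = (
    "You are tasked with analyzing structured credit reports and predicting "
    "whether a merchant is likely to repay a future loan. ..."  # full prefix from above
)
prefix_output = "Based on the above information, the binary repayment outcome is:"
report = "monthly_revenue: 12000; chargebacks: 2; tenure_months: 18"  # hypothetical features

prompt = f"{prefix_input}\n{report}\n{prefix_output}"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # "True" or "False"
```
 |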
cunghoctienganh/f5666dc3-84fc-47ba-a165-e1510945d1aa | cunghoctienganh | 2025-01-09T04:16:46Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:adapter:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T04:07:20Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-0.5B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5666dc3-84fc-47ba-a165-e1510945d1aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-0.5B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 91c3ca0a9cfa85e3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/91c3ca0a9cfa85e3_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/f5666dc3-84fc-47ba-a165-e1510945d1aa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/91c3ca0a9cfa85e3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9e82b84f-7780-4ba9-94ce-5b1b312df3b7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9e82b84f-7780-4ba9-94ce-5b1b312df3b7
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f5666dc3-84fc-47ba-a165-e1510945d1aa
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2055 | 0.2758 | 200 | 2.9915 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
anhbn/EraX-VL-7B-V1.5-Openvino-INT4 | anhbn | 2025-01-09T04:15:48Z | 17 | 0 | transformers | [
"transformers",
"openvino",
"qwen2_vl",
"image-text-to-text",
"erax",
"multimodal",
"erax-vl-2B",
"insurance",
"ocr",
"vietnamese",
"bcg",
"visual-question-answering",
"vi",
"en",
"zh",
"arxiv:2308.12966",
"arxiv:2407.10671",
"arxiv:2404.16821",
"arxiv:2404.07922",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| visual-question-answering | 2025-01-09T04:12:59Z | ---
license: apache-2.0
language:
- vi
- en
- zh
base_model:
- Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
tags:
- erax
- multimodal
- erax-vl-2B
- insurance
- ocr
- vietnamese
- bcg
pipeline_tag: visual-question-answering
widget:
- src: images/photo-1-16505057982762025719470.webp
example_title: Test 1
- src: images/vt-don-thuoc-f0-7417.jpeg
example_title: Test 2
---
<p align="left">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/GsQKdaTyn2FFx_cZvVHk3.png" alt="Logo">
</p>
# EraX-VL-7B-V1.5
## Introduction 🎉
Hot on the heels of the popular **<a href="https://huggingface.co/erax-ai/EraX-VL-7B-V1.0" target="_blank">EraX-VL-7B-V1.0 model</a>**, we proudly present **EraX-VL-7B-V1.5**, another robust multimodal model for **OCR (optical character recognition)** and **VQA (visual question-answering)** that excels in various languages 🌍, with a particular focus on Vietnamese 🇻🇳. This model stands out for its precise recognition capabilities across a range of documents 📝, including medical forms 🩺, invoices 🧾, bills of sale 💳, quotes 📄, and medical records 💊. This functionality is expected to be highly beneficial for hospitals 🏥, clinics 💉, insurance companies 🛡️, and other similar applications 📋. Built on the solid foundation of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)[1], which we found to be of high quality and fluent in Vietnamese, `EraX-VL-7B-V1.5` has been fine-tuned to enhance its performance. We plan to continue improving and releasing new versions for free, along with sharing performance benchmarks in the near future.
One standout feature of **EraX-VL-7B-V1.5** is its ability to handle multi-turn Q&A with impressive reasoning!
**NOTA BENE**:
- EraX-VL-7B-V1.5 is NOT a typical OCR-only tool like Tesseract but a multimodal LLM-based model. To use it effectively, you may have to **tune your prompt carefully** depending on your task.
- This model was NOT fine-tuned on medical (X-ray) datasets or car-accident imagery (yet). Stay tuned for an updated version coming in early 2025.
**EraX-VL-7B-V1.5** is the newest member of our **EraX LànhGPT** collection of LLM models.
- **Developed by:**
- Nguyễn Anh Nguyên ([email protected])
- Nguyễn Hồ Nam (BCG)
- Phạm Huỳnh Nhật ([email protected])
- Phạm Đình Thục ([email protected])
- **Funded by:** [Bamboo Capital Group](https://bamboocap.com.vn) and EraX
- **Model type:** Multimodal Transformer with over 7B parameters
- **Languages (NLP):** Primarily Vietnamese with multilingual capabilities
- **License:** Apache 2.0
- **Fine-tuned from:** [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
- **Prompt examples:** <a href="https://github.com/EraX-JS-Company/erax-vl-7b-v1/blob/main/prompts/Vietnam_popular_prompts.txt" target="_blank">Some popular prompt examples on Github.</a>
## Benchmarks 📊
## 🏆 Leaderboard
EraX-VL-7B-V1.5 achieves exceptionally high performance compared to other models of equal size, and even to models **10 times larger**, and it is fully open-source! You can re-run the benchmark at any time.
<table style="width:75%;">
<tr>
<th align="middle" width="300">Models</th>
<td align="middle" width="150"><b>Open-Source</b></td>
<td align="middle" width="300"><b>VI-MTVQA</b></td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1.5 🥇 </font></th>
<td align="middle">✅</td>
<td align="middle">47.2 </td>
</tr>
<tr>
<th align="middle">Qwen2-VL 72B 🥈 </th>
<td align="middle">✘</td>
<td align="middle">41.6 </td>
</tr>
<tr>
<th align="middle">ViGPT-VL 🥉 </th>
<td align="middle">✘</td>
<td align="middle">39.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-2B-V1.5</font></th>
<td align="middle"> ✅ </td>
<td align="middle">38.2 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>EraX-VL-7B-V1 </font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Vintern-1B-V2</font></th>
<td align="middle"> ✅ </td>
<td align="middle">37.4 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>Qwen2-VL 7B </font></th>
<td align="middle"> ✅ </td>
<td align="middle">30.0 </td>
</tr>
<tr>
<th align="middle">Claude3 Opus</th>
<td align="middle">✘</td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4o mini </th>
<td align="middle"> ✘ </td>
<td align="middle">29.1 </td>
</tr>
<tr>
<th align="middle">GPT-4V</th>
<td align="middle">✘</td>
<td align="middle">28.9 </td>
</tr>
<tr>
<th align="middle">Gemini Ultra</th>
<td align="middle">✘</td>
<td align="middle">28.6 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>InternVL2 76B</font></th>
<td align="middle"> ✅ </td>
<td align="middle">26.9 </td>
</tr>
<tr>
<th align="middle">QwenVL Max</th>
<td align="middle">✘</td>
<td align="middle">23.5 </td>
</tr>
<tr>
<th align="middle">Claude3 Sonnet</th>
<td align="middle">✘</td>
<td align="middle">20.8 </td>
</tr>
<tr>
<th align="middle">QwenVL Plus</th>
<td align="middle">✘</td>
<td align="middle">18.1 </td>
</tr>
<tr>
<th align="middle"><font color=darkred>MiniCPM-V2.5</font></th>
<td align="middle">✅</td>
<td align="middle">15.3 </td>
</tr>
</table>
**The test code for evaluating models in the paper can be found in**: <b><a href="https://github.com/EraX-JS-Company/EraX-MTVQA-Benchmark" target="_blank">EraX-JS-Company/EraX-MTVQA-Benchmark</a></b>
## API trial 🎉
Please contact **[email protected]** for API access inquiries.
## Examples 🧩
### 1. OCR - Optical Character Recognition for Multi-Images
**Example 01: Citizen identification card**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/trinhquangduy_front.jpg" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="images/trinhquangduy_back.jpg" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://support.google.com/google-ads/thread/270967947/t%C3%B4i-%C4%91%C3%A3-g%E1%BB%ADi-h%C3%ACnh-%E1%BA%A3nh-c%C4%83n-c%C6%B0%E1%BB%9Bc-c%C3%B4ng-d%C3%A2n-c%E1%BB%A7a-ch%C3%ADnh-t%C3%B4i-%C4%91%E1%BB%83-x%C3%A1c-minh-danh-t%C3%ADnh?hl=vi" target="_blank">Google Support</a>
</p>
```
{
"Số thẻ": "037094012351",
"Họ và tên": "TRỊNH QUANG DUY",
"Ngày sinh": "04/09/1994",
"Giới tính": "Nam",
"Quốc tịch": "Việt Nam",
"Quê quán": "Tân Thành, Kim Sơn, Ninh Bình",
"Nơi thường trú": "Xóm 6\nTân Thành, Kim Sơn, Ninh Bình",
"Có giá trị đến": "04/09/2034",
"Đặc điểm nhân dạng": "sẹo chấm c. 1cm trên đuôi mắt trái",
"Nơi cấp": "CỤC TRƯỞNG CỤC CẢNH SÁT\nQUẢN LÝ HÀNH CHÍNH VỀ TRẬT TỰ XÃ HỘI",
"Ngày cấp": "10/12/2022",
"Cán bộ ký tên": "Nguyễn Quốc Hùng",
"Mã định danh": "IDVNM0940123513037094012351"
}
```
**Example 02: Driver's License**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyenvandung_front.png" width="500" alt="Front View" />
<p>Front View</p>
</div>
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyenvandung_back.png" width="500" alt="Back View" />
<p>Back View</p>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://baophapluat.vn/khoi-to-tai-xe-len-mang-mua-giay-phep-lai-xe-gia-de-chay-xe-post481047.html" target="_blank">Báo Pháp luật</a>
</p>
```
{
"No.":"400116012313"
"Fullname":"NGUYỄN VĂN DŨNG"
"Date_of_birth":"08/06/1979"
"Nationality":"VIỆT NAM"
"Address":"X. Quỳnh Hầu, H. Quỳnh Lưu, T. Nghệ An
Nghệ An, ngày/date 23 tháng/month 04 năm/year 2022"
"Hang_Class":"FC"
"Expires":"23/04/2027"
"Place_of_issue":"Nghệ An"
"Date_of_issue":"ngày/date 23 tháng/month 04 năm/year 2022"
"Signer":"Trần Anh Tuấn"
"Các loại xe được phép":"Ô tô hạng C kéo rơmoóc, đầu kéo kéo sơmi rơmoóc và xe hạng B1, B2, C, FB2 (Motor vehicle of class C with a trailer, semi-trailer truck and vehicles of classes B1, B2, C, FB2)"
"Mã số":""
}
```
**Example 03: Vehicle Registration Certificate**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 0 10px;">
<img src="images/nguyentonnhuan.jpg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://vietnamnet.vn/phan-biet-cac-loai-giay-dang-ky-xe-khi-mua-moto-da-qua-su-dung-541341.html" target="_blank">Báo Vietnamnet</a>
</p>
```
{
"Tên chủ xe": "NGUYỄN TÔN NHUẬN",
"Địa chỉ": "KE27 Kp3 P.TTTây Q7",
"Nhãn hiệu": "HONDA",
"Số loại": "DYLAN",
"Màu sơn": "Trắng",
"Năm sản xuất": "2012",
"Số máy": "F03E-0057735",
"Số khung": "SA04F-070410",
"Dung tích": "152",
"Số chỗ ngồi": "02",
"Biển số đăng ký": "59V1-498.89",
"Đăng ký lần đầu ngày": "08/06/2004",
"Chức vụ": "Thượng tá",
"Người ký": "Trần Văn Hiểu"
}
```
**Example 04: Vehicle Registration**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/w5WCaQ-k9nupRIQYddcpr.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://llumar.com.vn/dang-kiem-xe-o-to/" target="_blank">https://llumar.com.vn</a>
</p>
```
{
"vehicle": {
"registration_number": "30A-072.36",
"vehicle_inspection_number": "2903V-093515",
"type": "ô tô con",
"mark": "MERCEDES-BENZ",
"model_code": "C300 W204",
"engine_number": "27294732096079",
"chassis_number": "RLMGF5EX3DV005333",
"manufactured_year_and_country": "2013, Việt Nam",
"life_time_limit_to": "",
"commercial_use": "",
"modification": ""
},
"specifications": {
"wheel_formula": "4x2",
"wheel_tread": "1521/1512 (mm)",
"overall_dimension": "4650 x 1770 x 1429 (mm)",
"largest_luggage_container_dimension": "",
"wheelbase": "2760 (mm)",
"kerb_mass": "1575 (kg)",
"design_authorized_pay_load": "",
"design_authorized_total_mass": "2090/2090 (kg)",
"design_authorized_towed_mass": "",
"permissible_number_of_pers_carried": "5 chỗ ngồi, 0 chỗ đứng, 0 chỗ nằm",
"type_of_fuel_used": "Xăng",
"engine_displacement": "2996 (cm3)",
"max_output_per_rpm": "170(kW)/6000vph",
"number": "KC-1292285"
},
"inspection_report_number": "2905V-20953/16",
"valid_until": "31/01/2018",
"place_date_of_issue": "Hà Nội, ngày 1 tháng 8 năm 2016",
"inspection_center": "ĐƠN VỊ KIỂM ĐỊNH XE CƠ GIỚI",
"signature": "Ngọc Tuấn",
"equipped_with_tachograph": "",
"inspection_stamp_was_not_issued": "",
"notes": "Biển đăng ký nền trắng"
}
```
**Example 05: Receipt**
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
<div style="text-align: center; margin: 10 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/40vIbNdM1cFXwQYNHx7Ag.jpeg" width="500"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://tintucketoan.com/cach-viet-hoa-don-hang-hoa-dich-vu-khong-chiu-thue-gtgt/" target="_blank">https://tintucketoan.com/</a>
</p>
```
{
'Mẫu số': '01GKTKT3/001',
'Ký hiệu': 'TC/18P',
'Số': '0000030',
'Họ tên người mua hàng': None,
'Tên đơn vị': 'Công Ty TNHH Kế Toán Hà Nội',
'Mã số thuế': '0106235869',
'Địa chỉ': 'Số 49 Ngõ 322 Lê Trọng Tấn, phường Khương Mai, quận Thanh Xuân, Hà Nội',
'Hình thức thanh toán': 'TM',
'STT': None,
'Tên hàng hóa, dịch vụ': 'Tra cứu phần mềm thư viện pháp luật trực tuyến',
'Đơn vị tính': None,
'Số lượng': None,
'Đơn giá': '168.000',
'Thành tiền': '2.016.000',
'Thuế suất GTGT': None,
'Tiền thuế GTGT': None,
'Tổng cộng tiền thanh toán': '2.016.000',
'Số tiền viết bằng chữ': 'Hai triệu, không trăm mười sáu nghìn đồng',
'Người bán hàng': 'Bùi Văn Hùng',
'Chức vụ người bán hàng': 'TRƯỞNG CHI NHÁNH'
}
```
### 2.1 Image Captioning
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/g5V60A7rI94TH0z3zdSAA.jpeg" width="700"/>
</div>
Hình ảnh là biểu đồ BMI theo tuổi, thể hiện mối quan hệ giữa chỉ số khối cơ thể (BMI) và độ tuổi của trẻ em. Biểu đồ được chia thành các vùng màu khác nhau tương ứng với các mức BMI khác nhau:
* **Vùng màu đỏ:** Chỉ số BMI cao hơn 25, cho thấy tình trạng béo phì.
* **Vùng màu vàng:** Chỉ số BMI nằm trong khoảng từ 18 đến 25, cho thấy nguy cơ béo phì.
* **Vùng màu xanh lá cây nhạt:** Chỉ số BMI nằm trong khoảng từ 16 đến 18, cho thấy sức khỏe dinh dưỡng tốt.
* **Vùng màu xanh lá cây đậm:** Chỉ số BMI thấp hơn 16, cho thấy tình trạng thiếu cân.
Trục tung biểu diễn chỉ số BMI, trục hoành biểu diễn tuổi (tính bằng năm). Đường cong màu xám đậm thể hiện đường chuẩn BMI theo tuổi. Các đường cong này cho thấy sự thay đổi BMI theo thời gian, giúp đánh giá sự phát triển cân nặng của trẻ em. Ví dụ, ở trẻ em dưới 3 tuổi, BMI thường dao động trong vùng thiếu cân hoặc sức khỏe dinh dưỡng tốt. Khi trẻ lớn lên, BMI có xu hướng tăng dần, nhưng tốc độ tăng trưởng có thể khác nhau tùy thuộc vào từng cá nhân. Biểu đồ cũng hiển thị các phần trăm phân vị (Percentile), cho biết tỷ lệ phần trăm trẻ em có BMI thấp hơn hoặc cao hơn so với một nhóm trẻ em cùng độ tuổi. Điều này giúp so sánh BMI của trẻ em với tiêu chuẩn quốc tế.
### 2.2 Image Captioning
<div align="center">
<img src="https://huggingface.co/erax-ai/EraX-VL-7B-V1.5/resolve/main/images/27vid-Gaza-City-Cover-gqmt-videoSixteenByNine1050%20(1).jpg" width="700"/>
</div>
Hình ảnh chụp một cảnh tượng đầy xúc động và bi thảm, dường như diễn ra ở một khu vực nghèo khó, có thể là một khu định cư hoặc khu ổ chuột. Trung tâm của bức ảnh là một chiếc xe đẩy được kéo bởi một con lừa. Trên xe đẩy có một nhóm người, bao gồm một người đàn ông lớn tuổi có vẻ như là người hướng dẫn, một phụ nữ mặc áo choàng đen, một phụ nữ trẻ mặc áo xám, một bé gái nhỏ được che mặt bằng khăn trùm đầu, và một cậu bé mặc áo xanh lá cây. Họ có vẻ như đang di chuyển từ một khu vực bị tàn phá sang một nơi khác. Xung quanh họ là những mảnh vụn, rác thải và những tòa nhà đổ nát, cho thấy một môi trường sống khó khăn và thiếu thốn. Phía sau họ, có một số người khác đang đi bộ hoặc đi xe đạp, một số người đang mang theo hành lý. Bầu trời trong xanh, nhưng không khí chung của bức ảnh là nặng nề và buồn bã. Có thể đây là một cảnh tượng di cư, tị nạn, hoặc một sự kiện thảm khốc nào đó.
### 3. Multi-turn Visual Question Answering and Reasoning
<div style="display: flex; flex-direction: row; align-items: center; justify-content: center;">
    <div style="text-align: center; margin: 10px 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63d8d8879dfcfa941d4d7cd9/S2UosY_TOOV5XNn3d4_zL.jpeg" width="700"/>
</div>
</div>
<p style="text-align: center; font-size: 12px; color: gray; margin-top: 10px;">
Source: <a href="https://kenh14.vn/bien-cang-thai-trinh-nhap-vien-vi-ngo-doc-dang-dan-to-1-resort-5-sao-thieu-trach-nhiem-voi-khach-hang-20200924214133912.chn" target="_blank">https://kenh14.vn</a>
</p>
- **User**: Hãy cho biết thông tin bệnh nhân
- **Assistant**: Tên: NGUYỄN THỊ THÁI TRINH, Ngày sinh: 09/09/1993, Giới tính: Nữ, Địa chỉ: TÂN ĐỊNH - Q1 - TPHCM, Mã thẻ BHYT: 19169080, Số khám bệnh: 2569501, Mã khu vực (K1/K2/K3): 2
- **User**: Họ được chẩn đoán bị bệnh gì?
- **Assistant**: Nhiễm trùng đường ruột do vi khuẩn, không xác định loài
- **User**: Người bệnh có tiền sử bệnh không? Nếu có thì là gì?
- **Assistant**: Thiếu Calci do chế độ ăn
- **User**: Bệnh nhân đã thực hiện những xét nghiệm gì?
- **Assistant**: [NQ286] Tổng phân tích tế bào máu ngoại vi (bằng máy đếm laser), [NQ286] Điện giải đồ (Na, K, Cl) [Máu], [NQ286] Đo hoạt độ ALT (GPT) [Máu], [NQ286] Đo hoạt độ AST (GOT) [Máu], [NQ286] Định lượng Calci toàn phần [Máu], [NQ286] Định lượng Creatinin [Máu], [NQ286] Định lượng Glucose [Máu], [NQ286] Định lượng Urê máu [Máu], [NQ286] Siêu âm ổ bụng (gan mật, tụy, lách, thận, bàng quang)
- **User**: Số tiền Vật tư y tế mà người bệnh phải trả là bao nhiêu?
- **Assistant**: 19,708
- **User**: Các loại thuốc dịch truyền người bệnh phải trả là gì, tổng số tiền là bao nhiêu?
- **Assistant**: Atropin Sulfat (Syv2019), Bicefzidim 1g (Syv2019), GONCAL (Syv2019), Lactated Ringer's-500ml (Syv2019), Nước cất pha tiêm 5ml (Syv2019), Sodium Chloride 0.9% -500ml (Syv2019), Vincomid (Syv2019), Vinopa (Syv2019), tổng cộng 45,234 đồng
## Quickstart 🎮
Install the necessary packages:
```bash
python -m pip install git+https://github.com/huggingface/transformers accelerate
python -m pip install qwen-vl-utils
python -m pip install flash-attn --no-build-isolation
```
Then you can use `EraX-VL-7B-V1.5` like this:
```python
import os
import base64
import json
import cv2
import numpy as np
import matplotlib.pyplot as plt
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
model_path = "erax/EraX-VL-7B-V1.5"
model = Qwen2VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
attn_implementation="eager", # replace with "flash_attention_2" if your GPU is Ampere architecture
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# processor = AutoProcessor.from_pretrained(model_path)
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
model_path,
min_pixels=min_pixels,
max_pixels=max_pixels,
)
image_path ="image.jpg"
with open(image_path, "rb") as f:
encoded_image = base64.b64encode(f.read())
decoded_image_text = encoded_image.decode('utf-8')
base64_data = f"data:image;base64,{decoded_image_text}"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": base64_data,
},
{
"type": "text",
"text": "Trích xuất thông tin nội dung từ hình ảnh được cung cấp."
},
],
}
]
# Prepare prompt
tokenized_text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[tokenized_text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Generation configs
generation_config = model.generation_config
generation_config.do_sample = True
generation_config.temperature = 1.0
generation_config.top_k = 1          # with top_k=1, sampling is effectively greedy decoding
generation_config.top_p = 0.9
generation_config.min_p = 0.1        # requires a transformers version that supports min_p sampling
generation_config.best_of = 5        # vLLM-style option; standard transformers generate() ignores it
generation_config.max_new_tokens = 2048
generation_config.repetition_penalty = 1.06
# Inference
generated_ids = model.generate(**inputs, generation_config=generation_config)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
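If you prefer not to base64-encode images, `qwen-vl-utils` also resolves local file paths and URLs directly. A minimal sketch (the file path below is a placeholder):

```python
# Alternative to base64: pass a local path or URL straight to qwen-vl-utils.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image.jpg"},  # placeholder; http(s) URLs also work
            {"type": "text", "text": "Trích xuất thông tin nội dung từ hình ảnh được cung cấp."},
        ],
    }
]
```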
## References 📑
[1] Qwen team. Qwen2-VL. 2024.
[2] Bai, Jinze, et al. "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond." arXiv preprint arXiv:2308.12966 (2023).
[3] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[4] Chen, Zhe, et al. "InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[5] Chen, Zhe, et al. "How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[6] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).
## Contact 🤝
- For correspondence regarding this work or inquiries about API trials, please contact Nguyễn Anh Nguyên at [[email protected]](mailto:[email protected]).
- Follow us on <b><a href="https://github.com/EraX-JS-Company" target="_blank">EraX Github</a></b>
|
PrunaAI/tonyshark-deepdeek-v3-1b-bnb-8bit-smashed | PrunaAI | 2025-01-09T04:14:15Z | 18 | 0 | null | [
"safetensors",
"deepseek_v3",
"pruna-ai",
"custom_code",
"base_model:tonyshark/deepdeek-v3-1b",
"base_model:quantized:tonyshark/deepdeek-v3-1b",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-02T02:08:57Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: tonyshark/deepdeek-v3-1b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo tonyshark/deepdeek-v3-1b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'  # quote the constraint so the shell does not treat ">" as a redirect
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/tonyshark-deepdeek-v3-1b-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("tonyshark/deepdeek-v3-1b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
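As a quick sanity check of the 8-bit savings, you can print the model's in-memory footprint (`get_memory_footprint()` is a standard method on `transformers` models):

```python
# Rough size check; compare against the bf16/fp16 base model loaded the same way.
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```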
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model tonyshark/deepdeek-v3-1b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF | mradermacher | 2025-01-09T04:06:12Z | 270 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT4-IMUGMA-gemma-2-9B",
"base_model:quantized:zelk12/MT4-IMUGMA-gemma-2-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-09T03:28:22Z | ---
base_model: zelk12/MT4-IMUGMA-gemma-2-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT4-IMUGMA-gemma-2-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
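As a concrete starting point, here is one way to fetch a single-file quant from this repo and run it with llama.cpp (the flags are illustrative; adjust context size and sampling to taste):

```bash
# Download one quant (Q4_K_M is the "fast, recommended" middle ground in the table below)
huggingface-cli download mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF \
  MT4-IMUGMA-gemma-2-9B.Q4_K_M.gguf --local-dir .

# Run it interactively with llama.cpp
./llama-cli -m MT4-IMUGMA-gemma-2-9B.Q4_K_M.gguf -p "Hello, how are you?" -n 128
```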
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT4-IMUGMA-gemma-2-9B-GGUF/resolve/main/MT4-IMUGMA-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nomnoos37/stt-v1.4-checkpoint830 | nomnoos37 | 2025-01-09T04:05:58Z | 6 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3-turbo",
"base_model:adapter:openai/whisper-large-v3-turbo",
"region:us"
]
| null | 2025-01-09T04:05:02Z | ---
base_model: openai/whisper-large-v3-turbo
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
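While the card is still a template, the repo metadata indicates a PEFT adapter on top of `openai/whisper-large-v3-turbo`, so a minimal loading sketch (not verified against this specific checkpoint) would be:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3-turbo", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nomnoos37/stt-v1.4-checkpoint830")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3-turbo")

# inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt").to(model.device)
# generated = model.generate(inputs.input_features.half())
# print(processor.batch_decode(generated, skip_special_tokens=True))
```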
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
EllieChoi/klue-roberta-base-klue-sts | EllieChoi | 2025-01-09T04:05:46Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10501",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-01-09T04:04:53Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10501
- loss:CosineSimilarityLoss
base_model: klue/roberta-base
widget:
- source_sentence: 선생님, 거실이랑 안방 중에 어디에 조명이 들어왔으면 하는거에요?
sentences:
- 네가 조명 켜고 싶은 곳이 안방이니 거실이니?
- 네이버 메일이랑 엔드라이브를 연동하는건 금지야
- 짐을 들고 오르내리는 것은 물론 맨몸으로도 좀 빡셉니다.
- source_sentence: 한적한 것이 도시 생활과는 전혀 달랐습니다.
sentences:
- 또 지역 신용보증기금의 심사를 거쳐 업체당 최대 5000만원까지 보증 지원한다.
- 열차와 고속·시외버스, 항공기, 연안여객선은 최대한 증편하기로 했다.
- 요리에 필요한 양념이 없던것이 아쉬웠습니다
- source_sentence: 북한에서 관리중인 도메인으로 메일을 보내면 안됩니다.
sentences:
- 포항 지역 지진은 얼마나 커?
- 북한 도메인으로 메일을 보내지마세요.
- 만약 당신이 팔레르모에 온다면, 이 집을 정말 추천해요!
- source_sentence: 다음 방문 때는 귀마개를 챙겨갈 예정입니다.
sentences:
- 다음에 또 하와이를 오면 재방문 할 예정입니다.
- 여태 만났던 비앤비숙소 호스트중에 손꼽히는 분이었습니다.
- 2019년 12월부터 1월 사이에 특별통지였는데, 신청일 현재 고용보험에 가입하면 지원을 받을 수 있나요?
- source_sentence: 집 바로 옆에 슈퍼가 있고 무엇보다 집이 조용해요.
sentences:
- 우리 집 바로 옆에 슈퍼마켓이 있는데, 무엇보다도 조용해요.
- 광복절이니 어디 마실 가지 말고 집에서 쉬렴.
- 백일 기념일이 어느 날짜죠?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on klue/roberta-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9936243373055442
name: Pearson Cosine
- type: spearman_cosine
value: 0.9738248100401111
name: Spearman Cosine
---
# SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'집 바로 옆에 슈퍼가 있고 무엇보다 집이 조용해요.',
'우리 집 바로 옆에 슈퍼마켓이 있는데, 무엇보다도 조용해요.',
'광복절이니 어디 마실 가지 말고 집에서 쉬렴.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9936 |
| **spearman_cosine** | **0.9738** |
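The scores above come from `EmbeddingSimilarityEvaluator`; reproducing the evaluation on your own sentence pairs is a short sketch (the pairs and gold score below are placeholders):

```python
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["집 바로 옆에 슈퍼가 있고 무엇보다 집이 조용해요."],
    sentences2=["우리 집 바로 옆에 슈퍼마켓이 있는데, 무엇보다도 조용해요."],
    scores=[0.9],  # placeholder gold similarity in [0, 1]
)
print(evaluator(model))  # returns pearson/spearman cosine metrics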
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,501 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 20.14 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.3 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.43</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>헌법 전문에 ‘5·18민주화운동’을 새기는 것은 5·18을 누구도 훼손하거나 부정할 수 없는 대한민국의 위대한 역사로 자리매김하는 일입니다.</code> | <code>2018년, 저는 ‘5·18민주이념의 계승’을 담은 개헌안을 발의한 바 있습니다.</code> | <code>0.33999999999999997</code> |
| <code>이와함께 코로나19로 촬영·제작이 중단된 한국영화 20여편에 제작지원금을 지원하고, 영화업계 관계자 4000여명의 직업훈련수당도 지급한다.</code> | <code>또한, 그것은 코로나19와 함께 촬영과 제작이 중단된 20개 이상의 한국 영화에 대한 지원을 제공할 것이며, 4,000명의 영화산업 관계자들에게 직업 훈련 수당도 지급할 것입니다.</code> | <code>0.8400000000000001</code> |
| <code>약속장소는 잊지 말고 분명하게 공지하세요.</code> | <code>저녁 일정이 안 잡힌 날짜 이번 주에 있으면 며칠인지 알려주세요.</code> | <code>0.06</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
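In Sentence Transformers 3.x, the loss above plugs into `SentenceTransformerTrainer`. A minimal training sketch with placeholder data (column names mirror this dataset):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("klue/roberta-base")
train_dataset = Dataset.from_dict({
    "sentence_0": ["약속장소는 잊지 말고 분명하게 공지하세요."],
    "sentence_1": ["저녁 일정이 안 잡힌 날짜 이번 주에 있으면 며칠인지 알려주세요."],
    "label": [0.06],  # gold similarity in [0, 1]
})
trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=CosineSimilarityLoss(model),
)
trainer.train()
```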
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| 0.7610 | 500 | 0.0275 | - |
| 1.0 | 657 | - | 0.9371 |
| 1.5221 | 1000 | 0.0082 | 0.9495 |
| 2.0 | 1314 | - | 0.9587 |
| 2.2831 | 1500 | 0.0051 | - |
| 3.0 | 1971 | - | 0.9691 |
| 3.0441 | 2000 | 0.0035 | 0.9696 |
| 3.8052 | 2500 | 0.0026 | - |
| 4.0 | 2628 | - | 0.9738 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
VERSIL91/ae40af1b-6a66-4bcf-979c-f0acce2b55be | VERSIL91 | 2025-01-09T04:04:32Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
]
| null | 2025-01-09T03:55:51Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ae40af1b-6a66-4bcf-979c-f0acce2b55be
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c2b9a34bcf614a3f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c2b9a34bcf614a3f_train_data.json
type:
field_input: sentence2
field_instruction: sentence1
field_output: english
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/ae40af1b-6a66-4bcf-979c-f0acce2b55be
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/c2b9a34bcf614a3f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ae40af1b-6a66-4bcf-979c-f0acce2b55be
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ae40af1b-6a66-4bcf-979c-f0acce2b55be
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ae40af1b-6a66-4bcf-979c-f0acce2b55be
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8944
## Model description
More information needed
## Intended uses & limitations
More information needed
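No usage notes were provided, but since this is a LoRA adapter for `MLP-KTLim/llama-3-Korean-Bllossom-8B`, a minimal inference sketch with `peft` would look like this (untested against this checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "MLP-KTLim/llama-3-Korean-Bllossom-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "VERSIL91/ae40af1b-6a66-4bcf-979c-f0acce2b55be")
tokenizer = AutoTokenizer.from_pretrained("MLP-KTLim/llama-3-Korean-Bllossom-8B")

inputs = tokenizer("Translate to English:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```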
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1202 | 0.0187 | 1 | 2.1809 |
| 2.1566 | 0.0935 | 5 | 2.0062 |
| 1.3219 | 0.1869 | 10 | 1.2942 |
| 0.8656 | 0.2804 | 15 | 0.9305 |
| 0.8457 | 0.3738 | 20 | 0.8944 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk/3b5a3faf-d22f-4b8b-8302-51c895577c92 | kostiantynk | 2025-01-09T04:04:30Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
]
| null | 2025-01-09T03:54:49Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b5a3faf-d22f-4b8b-8302-51c895577c92
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5c699451d3dc0028_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5c699451d3dc0028_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/3b5a3faf-d22f-4b8b-8302-51c895577c92
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5c699451d3dc0028_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4fd7ad13-9b31-4f42-9994-6d2cc4618ed6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4fd7ad13-9b31-4f42-9994-6d2cc4618ed6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3b5a3faf-d22f-4b8b-8302-51c895577c92
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
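As with other axolotl LoRA runs, the adapter can be loaded directly onto its base model; a minimal sketch using `AutoPeftModelForCausalLM` (untested against this checkpoint):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "kostiantynk/3b5a3faf-d22f-4b8b-8302-51c895577c92",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")
```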
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9671 | 0.0001 | 1 | 3.0385 |
| 2.7999 | 0.0003 | 3 | 3.0327 |
| 2.7961 | 0.0006 | 6 | 2.9438 |
| 2.6454 | 0.0010 | 9 | 2.7206 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mlburnham/Political_DEBATE_ModernBERT_large_v1.0 | mlburnham | 2025-01-09T03:59:43Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-08T22:32:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
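No usage snippet was provided. Assuming this follows the earlier Political DEBATE zero-shot (NLI-style) classifiers — an assumption, since this card is still a blank template — a starting point would be:

```python
from transformers import pipeline

# Assumes an NLI-style zero-shot head like prior Political DEBATE models.
classifier = pipeline(
    "zero-shot-classification",
    model="mlburnham/Political_DEBATE_ModernBERT_large_v1.0",
)
print(classifier(
    "The senate passed the infrastructure bill today.",
    candidate_labels=["about politics", "not about politics"],
    hypothesis_template="This text is {}.",
))
```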
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jonjew/PetiteBody | Jonjew | 2025-01-09T03:59:11Z | 84 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
]
| text-to-image | 2025-01-09T03:58:39Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
colorful sharp 4k HD nude photography of a woman with petite body and small
breasts showing her nipples and her pussy
output:
url: images/petite.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: petite body, small breasts
license: unknown
---
# Petite Body
<Gallery />
## Model description
From https://civitai.com/models/661070/petite-body-type-for-flux-small-breasts?modelVersionId=1196526
## Trigger words
You should use `petite body` to trigger the image generation.
You should use `small breasts` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PetiteBody/tree/main) them in the Files & versions tab.
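For reference, a minimal `diffusers` sketch for applying this LoRA on the FLUX.1-dev base (the `weight_name` may need to match the actual file in this repo if auto-detection fails):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/PetiteBody")  # add weight_name="..." if needed

image = pipe(
    "sharp 4k photography of a woman, petite body",  # include the trigger words
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("petite.png")
```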
|
hamdaanshaikh/ppo-Huggy | hamdaanshaikh | 2025-01-09T03:55:16Z | 55 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2025-01-09T03:55:11Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hamdaanshaikh/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ToastyPigeon/qwen-rp-test-h-qlora-flatlined | ToastyPigeon | 2025-01-09T03:53:17Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"base_model:arcee-ai/Virtuoso-Small",
"base_model:adapter:arcee-ai/Virtuoso-Small",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-08T22:13:43Z | ---
base_model: arcee-ai/Virtuoso-Small
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
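The repo tags indicate a QLoRA adapter on `arcee-ai/Virtuoso-Small` with 4-bit bitsandbytes quantization, so a plausible loading sketch (not verified against this checkpoint) is:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Virtuoso-Small", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ToastyPigeon/qwen-rp-test-h-qlora-flatlined")
tokenizer = AutoTokenizer.from_pretrained("arcee-ai/Virtuoso-Small")
```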
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
PrunaAI/Legalaz-12_llambo2_21_53-bnb-8bit-smashed | PrunaAI | 2025-01-09T03:42:40Z | 8 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:Legalaz/12_llambo2_21_53",
"base_model:quantized:Legalaz/12_llambo2_21_53",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T03:33:52Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Legalaz/12_llambo2_21_53
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo Legalaz/12_llambo2_21_53 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Legalaz-12_llambo2_21_53-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Legalaz/12_llambo2_21_53")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
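As an optional sanity check, you can inspect the quantized model's memory footprint with the standard `transformers` helper (a suggestion, not part of the smash pipeline):
```python
# Rough size of the 8-bit weights resident in memory
footprint_gb = model.get_memory_footprint() / 1e9
print(f"Smashed model footprint: {footprint_gb:.2f} GB")
```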
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Legalaz/12_llambo2_21_53, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
nhung03/8dcea68d-0798-45eb-8a23-c132228177f2 | nhung03 | 2025-01-09T03:29:45Z | 12 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T03:13:34Z | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8dcea68d-0798-45eb-8a23-c132228177f2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-4k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ca95c1dd76317914_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ca95c1dd76317914_train_data.json
type:
field_instruction: input
field_output: code_output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/8dcea68d-0798-45eb-8a23-c132228177f2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ca95c1dd76317914_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a9a4defb-80a3-4283-88ae-5b50de731b69
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a9a4defb-80a3-4283-88ae-5b50de731b69
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8dcea68d-0798-45eb-8a23-c132228177f2
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0802
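A minimal inference sketch for this LoRA adapter, assuming the base model fits on your device (the prompt is only an example):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "nhung03/8dcea68d-0798-45eb-8a23-c132228177f2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct", trust_remote_code=True)

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```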
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.432 | 0.1074 | 200 | 0.0802 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF | mradermacher | 2025-01-09T03:28:39Z | 1,541 | 1 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated",
"base_model:quantized:huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-01-09T02:06:33Z | ---
base_model: huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/Dolphin3.0-Llama3.1-8B-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
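For example, a single-file quant can be loaded with `llama-cpp-python`; this is a sketch, and the chosen quant file and context size are assumptions (multi-part files must be concatenated first, as described in the READMEs above):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Dolphin3.0-Llama3.1-8B-abliterated.i1-Q4_K_M.gguf",  # example quant
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)
out = llm("Q: What is an imatrix quant?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```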
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin3.0-Llama3.1-8B-abliterated-i1-GGUF/resolve/main/Dolphin3.0-Llama3.1-8B-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
duyphu/d191d1e5-202f-29b8-f2c9-95bb6e479ff7 | duyphu | 2025-01-09T03:23:37Z | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"region:us"
]
| null | 2025-01-09T03:07:35Z | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d191d1e5-202f-29b8-f2c9-95bb6e479ff7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0209a590cf394039_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0209a590cf394039_train_data.json
type:
field_input: treatment
field_instruction: disease
field_output: disease_id
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/d191d1e5-202f-29b8-f2c9-95bb6e479ff7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/0209a590cf394039_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ceef632e-f2fa-449b-a77c-22dff3b23ef5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ceef632e-f2fa-449b-a77c-22dff3b23ef5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d191d1e5-202f-29b8-f2c9-95bb6e479ff7
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 10.1634 |
| 9.0534 | 0.0089 | 10 | 6.8755 |
| 3.5839 | 0.0179 | 20 | 2.9895 |
| 2.7812 | 0.0268 | 30 | 2.7643 |
| 2.7556 | 0.0358 | 40 | 2.7127 |
| 2.669 | 0.0447 | 50 | 2.7041 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phongtintruong/meomeo-mhubert-vietbud-182-700 | phongtintruong | 2025-01-09T03:22:04Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"meomeo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-01-09T02:59:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CatBarks/t5_baseES_bce1_14 | CatBarks | 2025-01-09T03:20:37Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-09T03:17:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daqc/SmolLM2-FT-DPO-Medicina_es | daqc | 2025-01-09T03:16:02Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_2",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-12-31T01:54:53Z | ---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
model_name: SmolLM2-FT-DPO-Medicina_es
tags:
- generated_from_trainer
- smol-course
- module_2
- trl
- dpo
licence: license
---
# Model Card for SmolLM2-FT-DPO-Medicina_es
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="daqc/SmolLM2-FT-DPO-Medicina_es", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/da-qc/SmolLM2-FT-DPO-Medicina_es/runs/83qcp7eu)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
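For reference, DPO minimizes the following preference loss over chosen/rejected pairs (x, y_w, y_l), where π_θ is the policy being trained, π_ref the frozen reference model, and β a temperature:

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$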
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
duskdagger/oi13dsk1n | duskdagger | 2025-01-09T03:15:22Z | 881 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-09T03:15:12Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/gray-tabby-cat-rests-peacefully-260nw-2514370127.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# oi13dsk1n
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/duskdagger/oi13dsk1n/tree/main) them in the Files & versions tab.
|
skr1125/53K_hindi_f16_gguf_model | skr1125 | 2025-01-09T03:12:07Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-01-09T03:08:05Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** skr1125
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
duskdagger/bu5ty | duskdagger | 2025-01-09T03:12:02Z | 276 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
]
| text-to-image | 2025-01-09T03:11:52Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/attentive_fishing_cat_514198.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# bu5ty
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/duskdagger/bu5ty/tree/main) them in the Files & versions tab.
|
dzanbek/69e69505-241c-4d0a-8917-71f936e3b1df | dzanbek | 2025-01-09T03:11:55Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
]
| null | 2025-01-09T03:09:00Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 69e69505-241c-4d0a-8917-71f936e3b1df
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7a0dcf2cd449adfe_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7a0dcf2cd449adfe_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/69e69505-241c-4d0a-8917-71f936e3b1df
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/7a0dcf2cd449adfe_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b70af139-fd5f-4c53-91c7-aafc7cf0c6d0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b70af139-fd5f-4c53-91c7-aafc7cf0c6d0
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 69e69505-241c-4d0a-8917-71f936e3b1df
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0031 | 1 | nan |
| 0.0 | 0.0245 | 8 | nan |
| 0.0 | 0.0490 | 16 | nan |
| 0.0 | 0.0735 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Legalaz/12_llambo2_22_07 | Legalaz | 2025-01-09T03:10:43Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T03:08:42Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9007
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
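In effect, the linear method computes a normalized weighted average of the checkpoints' parameters. A minimal sketch of that computation, assuming all checkpoints share identical parameter names and shapes (mergekit itself additionally handles sharded checkpoints, dtype casting, and tokenizer alignment):
```python
import torch

def linear_merge(state_dicts, weights):
    # Normalized weighted average of parameters, as in mergekit's `linear` method.
    total = sum(weights)
    merged = {}
    for name, ref in state_dicts[0].items():
        acc = torch.zeros_like(ref, dtype=torch.float32)
        for sd, w in zip(state_dicts, weights):
            acc += (w / total) * sd[name].float()
        merged[name] = acc.to(torch.bfloat16)  # matches `dtype: bfloat16` above
    return merged
```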
|
mlburnham/Political_DEBATE_ModernBERT_base_v1.0 | mlburnham | 2025-01-09T03:08:28Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-01-08T22:29:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ahmed107/whisper-small-ar-eos-v8-eos-v12-1.7 | Ahmed107 | 2025-01-09T03:04:42Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:Ahmed107/whisper-small-ar-eos-v8",
"base_model:finetune:Ahmed107/whisper-small-ar-eos-v8",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2025-01-08T20:46:42Z | ---
library_name: transformers
base_model: Ahmed107/whisper-small-ar-eos-v8
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: whisper-small-ar-eos-v8-eos-v12-1.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ar-eos-v8-eos-v12-1.7
This model is a fine-tuned version of [Ahmed107/whisper-small-ar-eos-v8](https://huggingface.co/Ahmed107/whisper-small-ar-eos-v8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1353
- Accuracy: 0.6769
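A minimal inference sketch with the `transformers` audio-classification pipeline (the file name is a placeholder; 16 kHz mono audio is the usual expectation for Whisper-based models):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Ahmed107/whisper-small-ar-eos-v8-eos-v12-1.7",
)
print(classifier("sample.wav"))  # list of labels with scores
```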
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6284 | 1.0 | 506 | 0.6770 | 0.5022 |
| 0.6035 | 2.0 | 1012 | 0.6186 | 0.6830 |
| 0.5059 | 3.0 | 1518 | 0.6554 | 0.6674 |
| 0.582 | 4.0 | 2024 | 0.5851 | 0.6959 |
| 0.4402 | 5.0 | 2530 | 0.6048 | 0.6975 |
| 0.3958 | 6.0 | 3036 | 0.6634 | 0.6948 |
| 0.3674 | 7.0 | 3542 | 0.8059 | 0.6948 |
| 0.1777 | 8.0 | 4048 | 1.0172 | 0.6864 |
| 0.1309 | 9.0 | 4554 | 1.1437 | 0.6747 |
| 0.0751 | 10.0 | 5060 | 1.5102 | 0.6763 |
| 0.0493 | 11.0 | 5566 | 1.9769 | 0.6680 |
| 0.0371 | 12.0 | 6072 | 2.3622 | 0.6797 |
| 0.0081 | 13.0 | 6578 | 2.5350 | 0.6802 |
| 0.0002 | 14.0 | 7084 | 2.8013 | 0.6730 |
| 0.0178 | 15.0 | 7590 | 2.9604 | 0.6663 |
| 0.0001 | 16.0 | 8096 | 3.0905 | 0.6652 |
| 0.0001 | 17.0 | 8602 | 3.0268 | 0.6669 |
| 0.0001 | 18.0 | 9108 | 2.8729 | 0.6853 |
| 0.0 | 19.0 | 9614 | 2.9037 | 0.6847 |
| 0.0171 | 20.0 | 10120 | 2.8267 | 0.6808 |
| 0.0 | 21.0 | 10626 | 2.8189 | 0.6814 |
| 0.0 | 22.0 | 11132 | 2.9142 | 0.6791 |
| 0.0001 | 23.0 | 11638 | 2.8557 | 0.6814 |
| 0.0 | 24.0 | 12144 | 3.0361 | 0.6724 |
| 0.0 | 25.0 | 12650 | 3.0190 | 0.6713 |
| 0.0 | 26.0 | 13156 | 3.0526 | 0.6752 |
| 0.0 | 27.0 | 13662 | 3.0976 | 0.6802 |
| 0.0 | 28.0 | 14168 | 3.1136 | 0.6713 |
| 0.0 | 29.0 | 14674 | 3.1319 | 0.6758 |
| 0.0 | 30.0 | 15180 | 3.1353 | 0.6769 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
zajacowy/MAJKALORA | zajacowy | 2025-01-09T03:01:56Z | 16 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-08T22:45:52Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MAJKALORA
---
# Majkalora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MAJKALORA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zajacowy/MAJKALORA', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tarsssss/eng-jagoy-t5-001 | tarsssss | 2025-01-09T03:01:25Z | 35 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-12-26T03:17:58Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: tarsssss/eng-jagoy-t5-001
results: []
library_name: transformers
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tarsssss/eng-jagoy-t5-001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.7399
- Validation Loss: 5.1356
- Epoch: 138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.8603 | 7.4105 | 0 |
| 7.3775 | 7.1273 | 1 |
| 7.1632 | 6.9598 | 2 |
| 7.0228 | 6.8372 | 3 |
| 6.9085 | 6.7335 | 4 |
| 6.8226 | 6.6458 | 5 |
| 6.7451 | 6.5671 | 6 |
| 6.6785 | 6.5022 | 7 |
| 6.6254 | 6.4409 | 8 |
| 6.5606 | 6.3842 | 9 |
| 6.5163 | 6.3361 | 10 |
| 6.4682 | 6.2908 | 11 |
| 6.4250 | 6.2436 | 12 |
| 6.3749 | 6.1907 | 13 |
| 6.3293 | 6.1494 | 14 |
| 6.2822 | 6.1098 | 15 |
| 6.2560 | 6.0750 | 16 |
| 6.2078 | 6.0508 | 17 |
| 6.1839 | 6.0229 | 18 |
| 6.1561 | 5.9944 | 19 |
| 6.1146 | 5.9732 | 20 |
| 6.0885 | 5.9490 | 21 |
| 6.0587 | 5.9243 | 22 |
| 6.0366 | 5.9064 | 23 |
| 6.0135 | 5.8857 | 24 |
| 5.9904 | 5.8675 | 25 |
| 5.9681 | 5.8482 | 26 |
| 5.9473 | 5.8262 | 27 |
| 5.9263 | 5.8127 | 28 |
| 5.9031 | 5.7896 | 29 |
| 5.8827 | 5.7721 | 30 |
| 5.8566 | 5.7482 | 31 |
| 5.8406 | 5.7355 | 32 |
| 5.8285 | 5.7231 | 33 |
| 5.7944 | 5.7049 | 34 |
| 5.7822 | 5.6968 | 35 |
| 5.7567 | 5.6813 | 36 |
| 5.7526 | 5.6650 | 37 |
| 5.7363 | 5.6614 | 38 |
| 5.7132 | 5.6398 | 39 |
| 5.6945 | 5.6383 | 40 |
| 5.6786 | 5.6243 | 41 |
| 5.6636 | 5.6071 | 42 |
| 5.6527 | 5.5955 | 43 |
| 5.6390 | 5.5876 | 44 |
| 5.6198 | 5.5754 | 45 |
| 5.6082 | 5.5663 | 46 |
| 5.6070 | 5.5572 | 47 |
| 5.5782 | 5.5493 | 48 |
| 5.5679 | 5.5487 | 49 |
| 5.5520 | 5.5301 | 50 |
| 5.5307 | 5.5261 | 51 |
| 5.5284 | 5.5089 | 52 |
| 5.5160 | 5.5003 | 53 |
| 5.4976 | 5.4981 | 54 |
| 5.4864 | 5.4860 | 55 |
| 5.4795 | 5.4816 | 56 |
| 5.4653 | 5.4652 | 57 |
| 5.4484 | 5.4639 | 58 |
| 5.4335 | 5.4580 | 59 |
| 5.4231 | 5.4454 | 60 |
| 5.4132 | 5.4358 | 61 |
| 5.4064 | 5.4349 | 62 |
| 5.3886 | 5.4261 | 63 |
| 5.3913 | 5.4193 | 64 |
| 5.3692 | 5.4138 | 65 |
| 5.3556 | 5.4028 | 66 |
| 5.3469 | 5.4001 | 67 |
| 5.3421 | 5.3942 | 68 |
| 5.3194 | 5.3826 | 69 |
| 5.3243 | 5.3799 | 70 |
| 5.3081 | 5.3713 | 71 |
| 5.2921 | 5.3737 | 72 |
| 5.2845 | 5.3681 | 73 |
| 5.2754 | 5.3601 | 74 |
| 5.2594 | 5.3524 | 75 |
| 5.2527 | 5.3420 | 76 |
| 5.2496 | 5.3367 | 77 |
| 5.2360 | 5.3320 | 78 |
| 5.2193 | 5.3253 | 79 |
| 5.2141 | 5.3178 | 80 |
| 5.1993 | 5.3150 | 81 |
| 5.1923 | 5.3157 | 82 |
| 5.1875 | 5.3097 | 83 |
| 5.1776 | 5.3051 | 84 |
| 5.1693 | 5.3050 | 85 |
| 5.1533 | 5.3115 | 86 |
| 5.1567 | 5.2943 | 87 |
| 5.1348 | 5.2757 | 88 |
| 5.1317 | 5.2849 | 89 |
| 5.1191 | 5.2846 | 90 |
| 5.1102 | 5.2742 | 91 |
| 5.1054 | 5.2725 | 92 |
| 5.0944 | 5.2624 | 93 |
| 5.0906 | 5.2560 | 94 |
| 5.0712 | 5.2502 | 95 |
| 5.0719 | 5.2495 | 96 |
| 5.0628 | 5.2498 | 97 |
| 5.0597 | 5.2454 | 98 |
| 5.0402 | 5.2420 | 99 |
| 5.0308 | 5.2441 | 100 |
| 5.0193 | 5.2379 | 101 |
| 5.0198 | 5.2298 | 102 |
| 5.0110 | 5.2315 | 103 |
| 5.0087 | 5.2304 | 104 |
| 4.9906 | 5.2261 | 105 |
| 4.9883 | 5.2288 | 106 |
| 4.9818 | 5.2069 | 107 |
| 4.9612 | 5.2003 | 108 |
| 4.9560 | 5.2009 | 109 |
| 4.9453 | 5.2123 | 110 |
| 4.9385 | 5.2136 | 111 |
| 4.9238 | 5.2178 | 112 |
| 4.9291 | 5.1994 | 113 |
| 4.9097 | 5.1940 | 114 |
| 4.9093 | 5.1840 | 115 |
| 4.9057 | 5.1824 | 116 |
| 4.8907 | 5.1894 | 117 |
| 4.8919 | 5.1841 | 118 |
| 4.8699 | 5.1806 | 119 |
| 4.8671 | 5.1795 | 120 |
| 4.8629 | 5.1696 | 121 |
| 4.8552 | 5.1646 | 122 |
| 4.8414 | 5.1709 | 123 |
| 4.8444 | 5.1534 | 124 |
| 4.8330 | 5.1698 | 125 |
| 4.8231 | 5.1501 | 126 |
| 4.8198 | 5.1565 | 127 |
| 4.8004 | 5.1522 | 128 |
| 4.7996 | 5.1478 | 129 |
| 4.7915 | 5.1409 | 130 |
| 4.7845 | 5.1484 | 131 |
| 4.7837 | 5.1476 | 132 |
| 4.7727 | 5.1446 | 133 |
| 4.7729 | 5.1379 | 134 |
| 4.7628 | 5.1379 | 135 |
| 4.7568 | 5.1359 | 136 |
| 4.7400 | 5.1292 | 137 |
| 4.7399 | 5.1356 | 138 |
### Framework versions
- Transformers 4.33.2
- TensorFlow 2.10.0
- Datasets 2.15.0
- Tokenizers 0.13.3 |
Ahmed107/whisper-small-ar-eos-v8-eos-v12-2.5 | Ahmed107 | 2025-01-09T02:59:22Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:Ahmed107/whisper-small-ar-eos-v8",
"base_model:finetune:Ahmed107/whisper-small-ar-eos-v8",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2025-01-08T20:46:42Z | ---
library_name: transformers
base_model: Ahmed107/whisper-small-ar-eos-v8
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: whisper-small-ar-eos-v8-eos-v12-2.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ar-eos-v8-eos-v12-2.5
This model is a fine-tuned version of [Ahmed107/whisper-small-ar-eos-v8](https://huggingface.co/Ahmed107/whisper-small-ar-eos-v8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3173
- Accuracy: 0.6657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5805 | 1.0 | 506 | 0.7427 | 0.5 |
| 0.5596 | 2.0 | 1012 | 0.6658 | 0.6306 |
| 0.4595 | 3.0 | 1518 | 0.7195 | 0.6217 |
| 0.5451 | 4.0 | 2024 | 0.6221 | 0.6791 |
| 0.4334 | 5.0 | 2530 | 0.6253 | 0.6936 |
| 0.3446 | 6.0 | 3036 | 0.7335 | 0.6730 |
| 0.3106 | 7.0 | 3542 | 0.8420 | 0.6702 |
| 0.1988 | 8.0 | 4048 | 1.0634 | 0.6842 |
| 0.1554 | 9.0 | 4554 | 1.2181 | 0.6747 |
| 0.0698 | 10.0 | 5060 | 1.5398 | 0.6802 |
| 0.1746 | 11.0 | 5566 | 1.8350 | 0.6652 |
| 0.0639 | 12.0 | 6072 | 2.3253 | 0.6602 |
| 0.0004 | 13.0 | 6578 | 2.4828 | 0.6663 |
| 0.0338 | 14.0 | 7084 | 2.5842 | 0.6758 |
| 0.0006 | 15.0 | 7590 | 2.7395 | 0.6713 |
| 0.0002 | 16.0 | 8096 | 2.7720 | 0.6669 |
| 0.0001 | 17.0 | 8602 | 2.7865 | 0.6685 |
| 0.0002 | 18.0 | 9108 | 2.8497 | 0.6607 |
| 0.0002 | 19.0 | 9614 | 2.8792 | 0.6607 |
| 0.008 | 20.0 | 10120 | 2.9188 | 0.6691 |
| 0.0001 | 21.0 | 10626 | 2.7606 | 0.6920 |
| 0.0051 | 22.0 | 11132 | 3.0888 | 0.6669 |
| 0.0 | 23.0 | 11638 | 3.0709 | 0.6747 |
| 0.0 | 24.0 | 12144 | 3.1538 | 0.6730 |
| 0.0 | 25.0 | 12650 | 3.2574 | 0.6646 |
| 0.0 | 26.0 | 13156 | 3.2662 | 0.6663 |
| 0.0 | 27.0 | 13662 | 3.2635 | 0.6685 |
| 0.0 | 28.0 | 14168 | 3.2895 | 0.6669 |
| 0.0 | 29.0 | 14674 | 3.3094 | 0.6652 |
| 0.0 | 30.0 | 15180 | 3.3173 | 0.6657 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Legalaz/12_llambo2_21_53 | Legalaz | 2025-01-09T02:56:33Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T02:54:30Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9853
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
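As a rough illustration of what the linear method above computes (a sketch only, not mergekit's actual implementation), each parameter tensor in the output is a weighted sum of the corresponding tensors from the input models:
```python
# Hedged sketch of linear merging; `state_dicts` and `weights` mirror the
# `models`/`weight` entries in the YAML config above.
import torch

def linear_merge(state_dicts, weights):
    merged = {}
    for name in state_dicts[0]:
        # weighted sum in float32, then cast to the configured dtype
        acc = sum(w * sd[name].to(torch.float32)
                  for sd, w in zip(state_dicts, weights))
        merged[name] = acc.to(torch.bfloat16)  # dtype: bfloat16, as configured
    return merged
```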
|
skr1125/53K_hindi_merged_16bit_model | skr1125 | 2025-01-09T02:56:30Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-01-09T02:51:20Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** skr1125
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
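A minimal inference sketch for this merged 16-bit checkpoint using plain `transformers`; the prompt and generation settings are illustrative only.
```python
# Hedged sketch: load the merged checkpoint and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "skr1125/53K_hindi_merged_16bit_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```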
|
duyphu/b917c3e7-9c8a-e445-b761-f05ab8f673ac | duyphu | 2025-01-09T02:56:16Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2",
"license:gemma",
"region:us"
]
| null | 2025-01-09T02:41:38Z | ---
library_name: peft
license: gemma
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b917c3e7-9c8a-e445-b761-f05ab8f673ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b0c89329a22046df_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b0c89329a22046df_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/b917c3e7-9c8a-e445-b761-f05ab8f673ac
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 1
micro_batch_size: 2
mlflow_experiment_name: /tmp/b0c89329a22046df_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 120faf85-b5cd-46b3-83f0-5b4ca1c2d4f2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 120faf85-b5cd-46b3-83f0-5b4ca1c2d4f2
warmup_steps: 1
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b917c3e7-9c8a-e445-b761-f05ab8f673ac
This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.4239 |
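Since this repository holds a LoRA adapter rather than full weights, a minimal loading sketch with `peft` looks like the following (ids taken from this card; generation settings omitted):
```python
# Hedged sketch: attach the LoRA adapter (r=8, alpha=16 per the config above)
# to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2"
adapter_id = "duyphu/b917c3e7-9c8a-e445-b761-f05ab8f673ac"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="bfloat16")
model = PeftModel.from_pretrained(model, adapter_id)
```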
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
davidrd123/MarcoBackground-SimpleTrigger-Dev2Pro-QuarterEighthCrops-Flux-LoKr | davidrd123 | 2025-01-09T02:51:02Z | 128 | 0 | diffusers | [
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"lycoris",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-01-05T03:24:58Z | ---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- lycoris
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'In the style of m4rc0 anime background paintings, A series of industrial machines are arranged in rows inside a large, spacious warehouse. Bright natural light streams in from expansive windows, casting shadows across the wooden floor. The interior is filled with structural elements like beams and supports, suggesting a manufacturing environment.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
- text: 'In the style of m4rc0 anime background paintings, A moonlit alley with clothes hanging on a line and dimly lit buildings. The sky is overcast with clouds partially covering the moon. Balconies and beams create shadows across the scene.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_2_0.png
- text: 'In the style of m4rc0 anime background paintings, A dark, starry night sky with swirling clouds over a mountainous landscape. A small, illuminated caravan sits in an open field dotted with white flowers.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_3_0.png
- text: 'In the style of m4rc0 anime background paintings, A green chalkboard with handwritten text partially covered by shadows cast from a window. The window frame and sunlight create distinct lines and patterns on the board. Artwork pages are pinned at the top.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_4_0.png
- text: 'In the style of m4rc0 anime background paintings, A misty morning harbor with fishing boats gently bobbing in the water. The rising sun casts long shadows across weathered wooden docks, while seabirds circle overhead. Stacked crates and coiled ropes line the pier.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_5_0.png
- text: 'In the style of m4rc0 anime background paintings, A two-story library interior with spiral staircases and towering wooden bookshelves. Autumn sunlight filters through stained glass windows, creating colorful patterns on leather armchairs and scattered open books.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_6_0.png
- text: 'In the style of m4rc0 anime background paintings, An abandoned Victorian greenhouse with broken glass panels and overgrown vines. Shafts of afternoon light pierce through the dusty air, illuminating scattered terra cotta pots and rusted gardening tools.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_7_0.png
- text: 'In the style of m4rc0 anime background paintings, A rural train platform at dusk with a wooden waiting shelter. Paper lanterns cast a warm glow on the wooden planks, while steam from a distant locomotive drifts across the purple-orange sky.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_8_0.png
---
# MarcoBackground-SimpleTrigger-Dev2Pro-QuarterEighthCrops-Flux-LoKr
This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
No validation prompt was used during training.
## Validation settings
- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1408x768`
- Skip-layer guidance:
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 6
- Training steps: 8750
- Learning rate: 8e-05
- Learning rate schedule: constant
- Warmup steps: 100
- Max grad norm: 0.1
- Effective batch size: 3
- Micro-batch size: 3
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow-matching (extra parameters=['shift=3', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flow_matching_loss=compatible'])
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Caption dropout probability: 10.0%
- SageAttention: Enabled for inference
### LyCORIS Config:
```json
{
"algo": "lokr",
"multiplier": 1.0,
"linear_dim": 10000,
"linear_alpha": 1,
"factor": 16,
"apply_preset": {
"target_module": [
"Attention",
"FeedForward"
],
"module_algo_map": {
"Attention": {
"factor": 16
},
"FeedForward": {
"factor": 8
}
}
}
}
```
## Datasets
### marco-background-512
- Repeats: 22
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### marco-background-768
- Repeats: 22
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### marco-background-1024
- Repeats: 11
- Total number of images: 34
- Total number of aspect buckets: 3
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### marco-background-1536
- Repeats: 5
- Total number of images: 34
- Total number of aspect buckets: 4
- Resolution: 2.359296 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
- Used for regularisation data: No
### marco-background-512-crop
- Repeats: 11
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### marco-background-768-crop
- Repeats: 11
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### marco-background-512-tight-crop
- Repeats: 11
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### marco-background-768-tight-crop
- Repeats: 11
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
### marco-background-1024-crop
- Repeats: 5
- Total number of images: 34
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
- Used for regularisation data: No
## Inference
```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights
def download_adapter(repo_id: str):
import os
from huggingface_hub import hf_hub_download
adapter_filename = "pytorch_lora_weights.safetensors"
cache_dir = os.environ.get('HF_PATH', os.path.expanduser('~/.cache/huggingface/hub/models'))
cleaned_adapter_path = repo_id.replace("/", "_").replace("\\", "_").replace(":", "_")
path_to_adapter = os.path.join(cache_dir, cleaned_adapter_path)
path_to_adapter_file = os.path.join(path_to_adapter, adapter_filename)
os.makedirs(path_to_adapter, exist_ok=True)
hf_hub_download(
repo_id=repo_id, filename=adapter_filename, local_dir=path_to_adapter
)
return path_to_adapter_file
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_repo_id = 'davidrd123/MarcoBackground-SimpleTrigger-Dev2Pro-QuarterEighthCrops-Flux-LoKr'
adapter_filename = 'pytorch_lora_weights.safetensors'
adapter_file_path = download_adapter(repo_id=adapter_repo_id)
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_file_path, pipeline.transformer)
wrapper.merge_to()
prompt = "An astronaut is riding a horse through the jungles of Thailand."
## Optional: quantise the model to save on vram.
## Note: The model was quantised during training, and so it is recommended to do the same during inference time.
from optimum.quanto import quantize, freeze, qint8
quantize(pipeline.transformer, weights=qint8)
freeze(pipeline.transformer)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') # the pipeline is already in its target precision level
image = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(42),
width=1408,
height=768,
guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
|
mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF | mradermacher | 2025-01-09T02:50:44Z | 137 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT1-IF-gemma-2-RAt0.25v0.1GD-9B",
"base_model:quantized:zelk12/MT1-IF-gemma-2-RAt0.25v0.1GD-9B",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-08T18:32:30Z | ---
base_model: zelk12/MT1-IF-gemma-2-RAt0.25v0.1GD-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT1-IF-gemma-2-RAt0.25v0.1GD-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF/resolve/main/MT1-IF-gemma-2-RAt0.25v0.1GD-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
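Beyond llama.cpp itself, these files can also be loaded from Python via `llama-cpp-python`, which can fetch a quant straight from the hub; a hedged sketch using the Q4_K_M file from the table above (the `n_ctx` value is illustrative):
```python
# Hedged sketch: pull a GGUF quant from the hub and run a short completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/MT1-IF-gemma-2-RAt0.25v0.1GD-9B-GGUF",
    filename="MT1-IF-gemma-2-RAt0.25v0.1GD-9B.Q4_K_M.gguf",
    n_ctx=2048,  # context window; adjust to taste
)
out = llm("The meaning to life and the universe is", max_tokens=32)
print(out["choices"][0]["text"])
```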
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Hjgugugjhuhjggg/mergekit-passthrough-cijkogj-Q2_K-GGUF | Hjgugugjhuhjggg | 2025-01-09T02:47:26Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Hjgugugjhuhjggg/mergekit-passthrough-cijkogj",
"base_model:quantized:Hjgugugjhuhjggg/mergekit-passthrough-cijkogj",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-01-09T02:47:17Z | ---
base_model: Hjgugugjhuhjggg/mergekit-passthrough-cijkogj
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Hjgugugjhuhjggg/mergekit-passthrough-cijkogj-Q2_K-GGUF
This model was converted to GGUF format from [`Hjgugugjhuhjggg/mergekit-passthrough-cijkogj`](https://huggingface.co/Hjgugugjhuhjggg/mergekit-passthrough-cijkogj) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Hjgugugjhuhjggg/mergekit-passthrough-cijkogj) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hjgugugjhuhjggg/mergekit-passthrough-cijkogj-Q2_K-GGUF --hf-file mergekit-passthrough-cijkogj-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hjgugugjhuhjggg/mergekit-passthrough-cijkogj-Q2_K-GGUF --hf-file mergekit-passthrough-cijkogj-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Hjgugugjhuhjggg/mergekit-passthrough-cijkogj-Q2_K-GGUF --hf-file mergekit-passthrough-cijkogj-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Hjgugugjhuhjggg/mergekit-passthrough-cijkogj-Q2_K-GGUF --hf-file mergekit-passthrough-cijkogj-q2_k.gguf -c 2048
```
|
chauhoang/468937b2-aa83-964a-36f4-d5bb59920ee7 | chauhoang | 2025-01-09T02:44:09Z | 20 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
]
| null | 2025-01-09T02:29:18Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 468937b2-aa83-964a-36f4-d5bb59920ee7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8d3062d48342f5b7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8d3062d48342f5b7_train_data.json
type:
field_instruction: Question
field_output: Answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/468937b2-aa83-964a-36f4-d5bb59920ee7
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/8d3062d48342f5b7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7f723e7a-12ca-4d9d-b24b-4fc738f9395d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7f723e7a-12ca-4d9d-b24b-4fc738f9395d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 468937b2-aa83-964a-36f4-d5bb59920ee7
This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.3333 |
| 3.1732 | 0.0012 | 10 | 3.2253 |
| 3.0367 | 0.0024 | 20 | 3.0358 |
| 3.072 | 0.0036 | 30 | 2.9901 |
| 3.0583 | 0.0049 | 40 | 2.9766 |
| 2.9839 | 0.0061 | 50 | 2.9737 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SicariusSicariiStuff/Impish_LLAMA_6.84B_iMatrix | SicariusSicariiStuff | 2025-01-09T02:44:00Z | 260 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-01-09T02:31:33Z | ---
license: apache-2.0
---
|
nbninh/5eb34a9d-c8f1-44d1-a53b-3867996f6a14 | nbninh | 2025-01-09T02:36:40Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-01-09T02:25:45Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb03151dde95ae5b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb03151dde95ae5b_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/5eb34a9d-c8f1-44d1-a53b-3867996f6a14
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb03151dde95ae5b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0430
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4102 | 0.0886 | 200 | 2.0430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |