modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-27 00:42:13) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 499 classes) | tags (sequence, 1 – 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-27 00:40:00) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Kuongan/BantuBERTa-vmw-finetuned | Kuongan | 2025-01-29T05:14:20Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:dsfsi/BantuBERTa",
"base_model:finetune:dsfsi/BantuBERTa",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T05:07:49Z | ---
library_name: transformers
license: cc-by-4.0
base_model: dsfsi/BantuBERTa
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: BantuBERTa-vmw-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BantuBERTa-vmw-finetuned
This model is a fine-tuned version of [dsfsi/BantuBERTa](https://huggingface.co/dsfsi/BantuBERTa) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3269
- F1: 0.1633
- Roc Auc: 0.5507
- Accuracy: 0.4729
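
A minimal inference sketch with the 🤗 Transformers pipeline; the multi-label setup is an assumption based on the F1/ROC-AUC metrics above, and the label names come from whatever was saved with the checkpoint:

```python
from transformers import pipeline

# Load the fine-tuned classifier; top_k=None returns a score for every label,
# which suits the multi-label setup suggested by the F1 / ROC AUC metrics.
classifier = pipeline(
    "text-classification",
    model="Kuongan/BantuBERTa-vmw-finetuned",
    top_k=None,
)

# Placeholder input: replace with a sentence in the target language.
print(classifier("Replace this with a sentence in the target language."))
```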
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4095 | 1.0 | 66 | 0.3112 | 0.0 | 0.5 | 0.4612 |
| 0.2908 | 2.0 | 132 | 0.2903 | 0.0 | 0.5 | 0.4612 |
| 0.284 | 3.0 | 198 | 0.2882 | 0.0 | 0.5 | 0.4612 |
| 0.2958 | 4.0 | 264 | 0.2879 | 0.0 | 0.5 | 0.4612 |
| 0.2749 | 5.0 | 330 | 0.2851 | 0.0139 | 0.5033 | 0.4651 |
| 0.265 | 6.0 | 396 | 0.2868 | 0.0574 | 0.5173 | 0.4729 |
| 0.2281 | 7.0 | 462 | 0.2802 | 0.1073 | 0.5338 | 0.4845 |
| 0.1963 | 8.0 | 528 | 0.2876 | 0.1109 | 0.5353 | 0.4651 |
| 0.1749 | 9.0 | 594 | 0.3040 | 0.0966 | 0.5282 | 0.4496 |
| 0.1534 | 10.0 | 660 | 0.3046 | 0.1322 | 0.5396 | 0.4612 |
| 0.1336 | 11.0 | 726 | 0.3129 | 0.1602 | 0.5473 | 0.4690 |
| 0.1194 | 12.0 | 792 | 0.3222 | 0.1287 | 0.5372 | 0.4612 |
| 0.1106 | 13.0 | 858 | 0.3269 | 0.1633 | 0.5507 | 0.4729 |
| 0.1004 | 14.0 | 924 | 0.3296 | 0.1269 | 0.5360 | 0.4496 |
| 0.091 | 15.0 | 990 | 0.3364 | 0.1612 | 0.5513 | 0.4690 |
| 0.0855 | 16.0 | 1056 | 0.3347 | 0.1570 | 0.5472 | 0.4690 |
| 0.079 | 17.0 | 1122 | 0.3399 | 0.1513 | 0.5440 | 0.4574 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Triangle104/Virtuoso-Lite-Q4_K_M-GGUF | Triangle104 | 2025-01-29T05:14:17Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:arcee-ai/Virtuoso-Lite",
"base_model:quantized:arcee-ai/Virtuoso-Lite",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T05:12:04Z | ---
base_model: arcee-ai/Virtuoso-Lite
library_name: transformers
license: other
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Virtuoso-Lite-Q4_K_M-GGUF
This model was converted to GGUF format from [`arcee-ai/Virtuoso-Lite`](https://huggingface.co/arcee-ai/Virtuoso-Lite) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Lite) for more details on the model.
---
Model details:

Virtuoso-Lite (10B) is our next-generation, 10-billion-parameter language model based on the Llama-3 architecture. It is distilled from Deepseek-v3 using ~1.1B tokens/logits, allowing it to achieve robust performance at a significantly reduced parameter count compared to larger models. Despite its compact size, Virtuoso-Lite excels in a variety of tasks, demonstrating advanced reasoning, code generation, and mathematical problem-solving capabilities.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Virtuoso-Lite-Q4_K_M-GGUF --hf-file virtuoso-lite-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Virtuoso-Lite-Q4_K_M-GGUF --hf-file virtuoso-lite-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Virtuoso-Lite-Q4_K_M-GGUF --hf-file virtuoso-lite-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Virtuoso-Lite-Q4_K_M-GGUF --hf-file virtuoso-lite-q4_k_m.gguf -c 2048
```
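The same GGUF file can also be used from Python via the community `llama-cpp-python` bindings (a minimal sketch, assuming the file has already been downloaded locally, e.g. with `huggingface-cli download Triangle104/Virtuoso-Lite-Q4_K_M-GGUF`):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path to the locally downloaded quantized file from this repo.
llm = Llama(model_path="virtuoso-lite-q4_k_m.gguf", n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```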
|
Qingyun/Florence-2-large-DOTA-v1.0-lmmrotate | Qingyun | 2025-01-29T05:12:57Z | 18 | 0 | null | [
"tensorboard",
"safetensors",
"florence2",
"aerial",
"geoscience",
"remotesensing",
"image-text-to-text",
"custom_code",
"en",
"dataset:Qingyun/lmmrotate-sft-data",
"arxiv:2501.09720",
"base_model:microsoft/Florence-2-large",
"base_model:finetune:microsoft/Florence-2-large",
"license:mit",
"region:us"
] | image-text-to-text | 2025-01-10T07:22:08Z | ---
license: mit
datasets:
- Qingyun/lmmrotate-sft-data
language:
- en
base_model:
- microsoft/Florence-2-large
pipeline_tag: image-text-to-text
tags:
- aerial
- geoscience
- remotesensing
---
<p align="center">
<h1 align="center">LMMRotate 🎮: A Simple Aerial Detection Baseline of Multimodal Language Models</h1>
<p align="center">
<a href='https://scholar.google.com/citations?hl=en&user=TvsTun4AAAAJ' style='text-decoration: none' >Qingyun Li</a><sup></sup>&nbsp;
<a href='https://scholar.google.com/citations?user=A39S7JgAAAAJ&hl=en' style='text-decoration: none' >Yushi Chen</a><sup></sup>&nbsp;
<a href='https://www.researchgate.net/profile/Shu-Xinya' style='text-decoration: none' >Xinya Shu</a><sup></sup>&nbsp;
<a href='https://scholar.google.com/citations?hl=en&user=UzPtYnQAAAAJ' style='text-decoration: none' >Dong Chen</a><sup></sup>&nbsp;
<a href='https://scholar.google.com/citations?hl=en&user=WQgE8l8AAAAJ' style='text-decoration: none' >Xin He</a><sup></sup>&nbsp;
<a href='https://scholar.google.com/citations?user=OYtSc4AAAAAJ&hl=en' style='text-decoration: none' >Yi Yu</a><sup></sup>&nbsp;
<a href='https://yangxue0827.github.io/' style='text-decoration: none' >Xue Yang</a><sup></sup>&nbsp;
<p align='center'>
If you find our work helpful, please consider giving us a ⭐!
</p>
</p>
</p>
- ArXiv Paper: https://arxiv.org/abs/2501.09720
- GitHub Repo: https://github.com/Li-Qingyun/mllm-mmrotate
- HuggingFace Page: https://huggingface.co/collections/Qingyun/lmmrotate-6780cabaf49c4e705023b8df
This repo hosts the checkpoint of Florence-2-large trained on DOTA-v1.0 with LMMRotate. More checkpoints for aerial detection with LMMRotate from [our paper](https://arxiv.org/abs/2501.09720) can be found in [this repo](https://huggingface.co/Qingyun/Florence-2-models-lmmrotate).
LMMRotate fine-tunes large multimodal language models for oriented object detection in the style of MMRotate, and hosts the official implementation of the paper "A Simple Aerial Detection Baseline of Multimodal Language Models".
<img src="https://github.com/user-attachments/assets/d34e4c0c-9e04-446e-a511-2e7005e32074" alt="framework" width="100%" />
## Downloading Guide
You can download the files with your web browser from [the file page](https://huggingface.co/datasets/Qingyun/Florence-2-models-lmmrotate/tree/main).
We recommend downloading in a terminal using huggingface-cli (`pip install --upgrade huggingface_hub`). You can refer to [the documentation](https://huggingface.co/docs/huggingface_hub/guides/download) for more usage options.
```
# Set Huggingface Mirror for Chinese users (if required):
export HF_ENDPOINT=https://hf-mirror.com
# Download a certain checkpoint:
huggingface-cli download Qingyun/Florence-2-large-DOTA-v1.0-lmmrotate --repo-type model --local-dir checkpoint/Florence-2-large-DOTA-v1.0-lmmrotate/
# If an error (such as a network error) interrupts the download, just re-run the same command; huggingface_hub will resume the download.
```
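For programmatic downloads, the `huggingface_hub` Python API can be used instead of the CLI (a minimal sketch targeting the same local directory as above):

```python
from huggingface_hub import snapshot_download

# Download the full checkpoint repo to a local folder (resumes automatically on retry).
snapshot_download(
    repo_id="Qingyun/Florence-2-large-DOTA-v1.0-lmmrotate",
    repo_type="model",
    local_dir="checkpoint/Florence-2-large-DOTA-v1.0-lmmrotate/",
)
```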
## Detection Performance

## Cite
LMMRotate paper:
```
@article{li2025lmmrotate,
  title={A Simple Aerial Detection Baseline of Multimodal Language Models},
  author={Li, Qingyun and Chen, Yushi and Shu, Xinya and Chen, Dong and He, Xin and Yu, Yi and Yang, Xue},
  journal={arXiv preprint arXiv:2501.09720},
  year={2025}
}
``` |
daniel40/153e9d1c-c315-4f2e-b6bb-f4bc8e469734 | daniel40 | 2025-01-29T05:11:04Z | 7 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-01-29T05:10:20Z | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 153e9d1c-c315-4f2e-b6bb-f4bc8e469734
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74adf53c74422e9c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74adf53c74422e9c_train_data.json
type:
field_input: material
field_instruction: questions
field_output: gpt4_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/153e9d1c-c315-4f2e-b6bb-f4bc8e469734
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/74adf53c74422e9c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31f4ffea-837d-4756-a086-902d589e7281
wandb_project: Birthday-SN56-28-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31f4ffea-837d-4756-a086-902d589e7281
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 153e9d1c-c315-4f2e-b6bb-f4bc8e469734
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9765
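
A minimal sketch of loading this LoRA adapter on top of the base model with 🤗 PEFT (the prompt and generation settings below are illustrative, not taken from the training run):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
model = PeftModel.from_pretrained(base, "daniel40/153e9d1c-c315-4f2e-b6bb-f4bc8e469734")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")

inputs = tokenizer("Answer the following question:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```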
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 3.2399 |
| 12.5971 | 0.0159 | 13 | 3.1966 |
| 12.688 | 0.0318 | 26 | 3.0354 |
| 11.81 | 0.0478 | 39 | 2.9765 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ivangrapher/fd245b14-5076-4436-a769-f20f4f212ef6 | ivangrapher | 2025-01-29T05:09:51Z | 9 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:07:58Z | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd245b14-5076-4436-a769-f20f4f212ef6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74adf53c74422e9c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74adf53c74422e9c_train_data.json
type:
field_input: material
field_instruction: questions
field_output: gpt4_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 256
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ivangrapher/fd245b14-5076-4436-a769-f20f4f212ef6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/74adf53c74422e9c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 15
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31f4ffea-837d-4756-a086-902d589e7281
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31f4ffea-837d-4756-a086-902d589e7281
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fd245b14-5076-4436-a769-f20f4f212ef6
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7534
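
A minimal sketch of loading the adapter on a 4-bit quantized base, mirroring the `load_in_4bit: true` setting in the config above (a CUDA GPU and the `bitsandbytes` package are assumed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize the base to 4-bit as in the training config, then attach the LoRA adapter.
bnb = BitsAndBytesConfig(load_in_4bit=True)
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-125m", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ivangrapher/fd245b14-5076-4436-a769-f20f4f212ef6")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")

inputs = tokenizer("Question: what does this adapter do?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```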
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 4.1791 |
| 16.3145 | 0.0061 | 5 | 4.1676 |
| 16.6409 | 0.0122 | 10 | 4.1007 |
| 16.1788 | 0.0184 | 15 | 3.9727 |
| 16.0224 | 0.0245 | 20 | 3.8250 |
| 14.8155 | 0.0306 | 25 | 3.7655 |
| 14.5486 | 0.0367 | 30 | 3.7534 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
daniel40/16a9a9dc-7165-48b1-9626-d52eea6c23ed | daniel40 | 2025-01-29T05:09:11Z | 10 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:mit",
"region:us"
] | null | 2025-01-29T05:08:32Z | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16a9a9dc-7165-48b1-9626-d52eea6c23ed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 74adf53c74422e9c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/74adf53c74422e9c_train_data.json
type:
field_input: material
field_instruction: questions
field_output: gpt4_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/16a9a9dc-7165-48b1-9626-d52eea6c23ed
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/74adf53c74422e9c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31f4ffea-837d-4756-a086-902d589e7281
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31f4ffea-837d-4756-a086-902d589e7281
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 16a9a9dc-7165-48b1-9626-d52eea6c23ed
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0012 | 1 | 3.2399 |
| 12.5959 | 0.0159 | 13 | 3.1974 |
| 12.6971 | 0.0318 | 26 | 3.0356 |
| 11.8143 | 0.0478 | 39 | 2.9754 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/Anusha64_-_NousResearch-7b-apr1-8bits | RichardErkhov | 2025-01-29T05:07:46Z | 6 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T05:03:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NousResearch-7b-apr1 - bnb 8bits
- Model creator: https://huggingface.co/Anusha64/
- Original model: https://huggingface.co/Anusha64/NousResearch-7b-apr1/
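
A minimal loading sketch for this 8-bit bitsandbytes export (an assumption-heavy sketch: it presumes a CUDA GPU, the `bitsandbytes` package, and that the quantization config is stored with the checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/Anusha64_-_NousResearch-7b-apr1-8bits"

# The 8-bit bitsandbytes quantization config is expected to ship with the
# checkpoint, so a plain from_pretrained with device_map="auto" should suffice.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```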
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nblinh/2a2eb144-6c21-4bd0-b59e-c7a4dac907c7 | nblinh | 2025-01-29T05:06:55Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:40:56Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a2eb144-6c21-4bd0-b59e-c7a4dac907c7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh/2a2eb144-6c21-4bd0-b59e-c7a4dac907c7
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2a2eb144-6c21-4bd0-b59e-c7a4dac907c7
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8589
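
A minimal sketch of loading the adapter together with its Llama-3.1 base via `AutoPeftModelForCausalLM` (the prompt is illustrative; the quantized loading used in training is optional at inference time):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter = "nblinh/2a2eb144-6c21-4bd0-b59e-c7a4dac907c7"

# AutoPeftModelForCausalLM reads the adapter config and pulls in the
# unsloth/Meta-Llama-3.1-8B-Instruct base automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")

prompt = "Rewrite the second sentence as a counterfactual:"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```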
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6632 | 0.7519 | 200 | 1.8589 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso16/60f6aa50-cd10-4a72-8b37-958ccda7a35b | lesso16 | 2025-01-29T05:06:41Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:57:26Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 60f6aa50-cd10-4a72-8b37-958ccda7a35b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dcef816926ec2838_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dcef816926ec2838_train_data.json
type:
field_input: activity
field_instruction: topic
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso16/60f6aa50-cd10-4a72-8b37-958ccda7a35b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/dcef816926ec2838_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d997858c-edf3-49a2-a1d9-29c48b4b7819
wandb_project: multi
wandb_run: your_name
wandb_runid: d997858c-edf3-49a2-a1d9-29c48b4b7819
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 60f6aa50-cd10-4a72-8b37-958ccda7a35b
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6222
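
A minimal sketch of merging the LoRA weights into the Qwen2.5-1.5B-Instruct base for standalone use (merging is optional; the adapter can also be kept separate, and the output path below is only a placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "lesso16/60f6aa50-cd10-4a72-8b37-958ccda7a35b")

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("qwen2.5-1.5b-60f6aa50-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct").save_pretrained("qwen2.5-1.5b-60f6aa50-merged")
```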
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6336 | 0.7569 | 200 | 1.6222 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrHungddddh/50d19290-3510-44b4-af8c-31211ff1e183 | mrHungddddh | 2025-01-29T05:06:32Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:40:43Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 50d19290-3510-44b4-af8c-31211ff1e183
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrHungddddh/50d19290-3510-44b4-af8c-31211ff1e183
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 50d19290-3510-44b4-af8c-31211ff1e183
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6911 | 0.7519 | 200 | 1.8512 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Ba2han/qwen-14b-1m-distillthink-sce | Ba2han | 2025-01-29T05:05:38Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Qwen/Qwen2.5-14B",
"base_model:merge:Qwen/Qwen2.5-14B",
"base_model:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:merge:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:merge:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-29T04:59:42Z | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- Qwen/Qwen2.5-14B-Instruct-1M
- Qwen/Qwen2.5-Coder-14B-Instruct
- Qwen/Qwen2.5-14B
library_name: transformers
tags:
- mergekit
- merge
---
# qwen-1m-distill-sce
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged with the [SCE](https://arxiv.org/abs/2408.07990) merge method, using [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as the base.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
* [Qwen/Qwen2.5-14B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-1M)
* [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
base_model: Qwen/Qwen2.5-14B
models:
- model: Qwen/Qwen2.5-14B-Instruct-1M
- model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- model: Qwen/Qwen2.5-Coder-14B-Instruct
parameters:
select_topk: 0.75 # retain top 75% high-variance parameters
dtype: bfloat16
normalize: true
```
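Once merged, the result is a standard Qwen2-architecture checkpoint and loads like any other 🤗 Transformers causal LM (a minimal sketch, assuming enough GPU memory for a 14B model in bfloat16 and that the tokenizer was exported with the merge):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Ba2han/qwen-14b-1m-distillthink-sce"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

# Chat-style generation with the Qwen chat template.
messages = [{"role": "user", "content": "Briefly explain what an SCE model merge does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```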
|
oldiday/d0572279-8153-4390-8f6e-99af7682fcea | oldiday | 2025-01-29T05:05:31Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2_moe",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-qwen1.5-moe",
"base_model:adapter:katuni4ka/tiny-random-qwen1.5-moe",
"region:us"
] | null | 2025-01-29T04:08:38Z | ---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0572279-8153-4390-8f6e-99af7682fcea
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-qwen1.5-moe
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a4421a5587341a24_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a4421a5587341a24_train_data.json
type:
field_instruction: Statement
field_output: Answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: oldiday/d0572279-8153-4390-8f6e-99af7682fcea
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/a4421a5587341a24_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: dcb6298c-b1c9-49be-85c6-f2c705fdd54f
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: dcb6298c-b1c9-49be-85c6-f2c705fdd54f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d0572279-8153-4390-8f6e-99af7682fcea
This model is a fine-tuned version of [katuni4ka/tiny-random-qwen1.5-moe](https://huggingface.co/katuni4ka/tiny-random-qwen1.5-moe) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 11.9324 |
| 11.9349 | 0.0004 | 9 | 11.9275 |
| 11.9189 | 0.0008 | 18 | 11.9133 |
| 11.9018 | 0.0012 | 27 | 11.8939 |
| 11.875 | 0.0016 | 36 | 11.8749 |
| 11.8594 | 0.0020 | 45 | 11.8638 |
| 11.8602 | 0.0024 | 54 | 11.8589 |
| 11.8533 | 0.0028 | 63 | 11.8567 |
| 11.861 | 0.0032 | 72 | 11.8556 |
| 11.8563 | 0.0036 | 81 | 11.8551 |
| 11.8517 | 0.0040 | 90 | 11.8549 |
| 11.8516 | 0.0044 | 99 | 11.8548 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/4efbc727-aa5e-42d9-8f7f-c767c49d5253 | shibajustfor | 2025-01-29T05:02:35Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T05:00:35Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4efbc727-aa5e-42d9-8f7f-c767c49d5253
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dcef816926ec2838_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dcef816926ec2838_train_data.json
type:
field_input: activity
field_instruction: topic
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/4efbc727-aa5e-42d9-8f7f-c767c49d5253
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/dcef816926ec2838_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d997858c-edf3-49a2-a1d9-29c48b4b7819
wandb_project: Birthday-SN56-39-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d997858c-edf3-49a2-a1d9-29c48b4b7819
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4efbc727-aa5e-42d9-8f7f-c767c49d5253
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.1771 |
| 2.0809 | 0.0062 | 13 | 1.8749 |
| 1.8924 | 0.0123 | 26 | 1.7478 |
| 1.7836 | 0.0185 | 39 | 1.7156 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jhangmez/Llama-3.2-Banca-1-GGUF | jhangmez | 2025-01-29T05:01:40Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-28T23:53:32Z | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jhangmez
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nhoxinh/e7caeb20-206c-4574-bd48-88f58e14d8ce | nhoxinh | 2025-01-29T04:58:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:40:52Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e7caeb20-206c-4574-bd48-88f58e14d8ce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/e7caeb20-206c-4574-bd48-88f58e14d8ce
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e7caeb20-206c-4574-bd48-88f58e14d8ce
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6898 | 0.7519 | 200 | 1.8589 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nbninh/54e16848-f6e0-45d2-9b85-c89e961b23ae | nbninh | 2025-01-29T04:58:37Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:41:05Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 54e16848-f6e0-45d2-9b85-c89e961b23ae
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/54e16848-f6e0-45d2-9b85-c89e961b23ae
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 54e16848-f6e0-45d2-9b85-c89e961b23ae
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6928 | 0.7519 | 200 | 1.8508 |
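
Not part of the auto-generated card: a minimal inference sketch, assuming the standard 🤗 PEFT and Transformers APIs and that the adapter was pushed to the `hub_model_id` shown in the config above.

```python
# Minimal sketch (not in the original card): load the LoRA adapter produced by this
# run on top of its base model for inference. Assumes `peft` and `transformers` are
# installed and there is enough memory for the 8B base; the prompt is illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "nbninh/54e16848-f6e0-45d2-9b85-c89e961b23ae"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
model.eval()

inputs = tokenizer("Rewrite the following sentence:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```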
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
adammandic87/96e04a36-3597-48c6-b3fd-1778365b3586 | adammandic87 | 2025-01-29T04:58:30Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:56:39Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 96e04a36-3597-48c6-b3fd-1778365b3586
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- dcef816926ec2838_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dcef816926ec2838_train_data.json
type:
field_input: activity
field_instruction: topic
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/96e04a36-3597-48c6-b3fd-1778365b3586
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/dcef816926ec2838_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d997858c-edf3-49a2-a1d9-29c48b4b7819
wandb_project: birthday-sn56-19-Gradients-On-Demand
wandb_run: your_name
wandb_runid: d997858c-edf3-49a2-a1d9-29c48b4b7819
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 96e04a36-3597-48c6-b3fd-1778365b3586
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.1771 |
| 2.0801 | 0.0062 | 13 | 1.8730 |
| 1.8894 | 0.0123 | 26 | 1.7470 |
| 1.783 | 0.0185 | 39 | 1.7150 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Recomendation_PC-GGUF | mradermacher | 2025-01-29T04:58:29Z | 324 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:Rickenuel/Recommendation_pc",
"base_model:quantized:Rickenuel/Recommendation_pc",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-28T06:25:00Z | ---
base_model: Rickenuel/Recommendation_pc
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Rickenuel/Recommendation_pc
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
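
As a hedged sketch (not from the original card), one way to run one of the quants listed below is through the `llama-cpp-python` bindings; the repo id and file name are taken from this page, everything else is an assumption about your local setup.

```python
# Minimal sketch: download one of the GGUF files listed below and run it with the
# llama-cpp-python bindings (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Recomendation_PC-GGUF",
    filename="Recomendation_PC.Q4_K_M.gguf",  # file name from the table below
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Recommend a budget gaming PC build:", max_tokens=128)
print(out["choices"][0]["text"])
```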
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Recomendation_PC-GGUF/resolve/main/Recomendation_PC.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
laquythang/65418bc0-5fb5-48f2-83d0-679a8d21b597 | laquythang | 2025-01-29T04:58:28Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:40:48Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 65418bc0-5fb5-48f2-83d0-679a8d21b597
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/65418bc0-5fb5-48f2-83d0-679a8d21b597
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 65418bc0-5fb5-48f2-83d0-679a8d21b597
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7089 | 0.7519 | 200 | 1.8537 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/lchaloupsky_-_czech-gpt2-oscar-4bits | RichardErkhov | 2025-01-29T04:57:57Z | 6 | 0 | null | [
"safetensors",
"gpt2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:57:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
czech-gpt2-oscar - bnb 4bits
- Model creator: https://huggingface.co/lchaloupsky/
- Original model: https://huggingface.co/lchaloupsky/czech-gpt2-oscar/
Original model description:
---
language: cs
widget:
- text: Univerzita je základem
license: mit
datasets:
- oscar
---
# Czech GPT-2 small model trained on the OSCAR dataset
This model was trained as a part of the [master thesis](https://dspace.cuni.cz/handle/20.500.11956/176356?locale-attribute=en) on the Czech part of the [OSCAR](https://huggingface.co/datasets/oscar) dataset.
## Introduction
Czech-GPT2-OSCAR (Czech GPT-2 small) is a state-of-the-art language model for Czech based on the GPT-2 small model. Unlike the original GPT-2 small model, this model is trained to predict sequences of only 512 tokens instead of 1024, as it serves as a basis for the [Czech-GPT2-Medical](https://huggingface.co/lchaloupsky/czech-gpt2-medical) model.
The model was trained on the Czech part of the [OSCAR](https://huggingface.co/datasets/oscar) dataset using transfer-learning and fine-tuning techniques, in about a week on a single NVIDIA A100 SXM4 40GB GPU with a total of 21 GB of training data.
This model was trained as part of the master thesis as a proof of concept that it is possible to obtain a state-of-the-art Czech language model with fewer resources than the original, in a significantly shorter time, and mainly as a basis for the [Czech-GPT2-Medical](https://huggingface.co/lchaloupsky/czech-gpt2-medical) model. There was no Czech GPT-2 model available at the time the master thesis began.
It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 deep learning framework, applying all of fastai v2's fine-tuning techniques.
The solution is based on the [Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) article.
The trained model is now available on Hugging Face under [czech-gpt2-oscar](https://huggingface.co/lchaloupsky/czech-gpt2-oscar/). For more information, please ask in the Community discussion.
## Training/Evaluation
For more information on training the model or its evaluation, please have a look at the [thesis](https://dspace.cuni.cz/handle/20.500.11956/176356?locale-attribute=en) itself.
## GPT-2 Model description
*Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)*
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## How to use Czech-GPT2-OSCAR with HuggingFace (PyTorch)
*The following code use PyTorch. To use TensorFlow, check the below corresponding paragraph.*
### Load Czech-GPT2-OSCAR and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
tokenizer = GPT2Tokenizer.from_pretrained("lchaloupsky/czech-gpt2-oscar")
model = GPT2LMHeadModel.from_pretrained("lchaloupsky/czech-gpt2-oscar")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
# For older versions of the 'transformers' library use this
# tokenizer.max_len=1024
model.eval() # disable dropout (or leave in train mode to finetune)
```
### Generate one word
```python
# input sequence
text = "Univerzita je základem"
inputs = tokenizer(text, return_tensors="pt")
# model output
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
# results
print('input text:', text)
print('predicted text:', predicted_text)
```
### Generate one full sequence
```python
# input sequence
text = "Univerzita je základem"
inputs = tokenizer(text, return_tensors="pt") # tokenizer.encode(text, return_tensors="pt") directly for input_ids
# model output using Top-k sampling text generation method
sample_outputs = model.generate(inputs.input_ids,
pad_token_id=50256,
do_sample=True,
max_length=50, # put the token number you want
top_k=40,
num_return_sequences=1)
# generated sequence
for i, sample_output in enumerate(sample_outputs):
print("{}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist()))) # tokenizer.decode(sample_output, skip_special_tokens=True)
```
## How to use Czech-GPT2-OSCAR with HuggingFace (TensorFlow)
*The following code use TensorFlow. To use PyTorch, check the above corresponding paragraph.*
### Load Czech-GPT2-OSCAR and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
import tensorflow as tf
tokenizer = GPT2Tokenizer.from_pretrained("lchaloupsky/czech-gpt2-oscar")
model = TFGPT2LMHeadModel.from_pretrained("lchaloupsky/czech-gpt2-oscar")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
# For older versions of the 'transformers' library use this
# tokenizer.max_len=1024
# Note: TF/Keras models have no eval(); dropout is already disabled at inference time (training=False by default)
```
### Generate one full sequence
```python
# input sequence
text = "Univerzita je základem"
input_ids = tokenizer.encode(text, return_tensors="tf")
# model output using Top-k sampling text generation method
outputs = model.generate(input_ids, eos_token_id=50256, pad_token_id=50256,
do_sample=True,
max_length=40,
top_k=40)
print(tokenizer.decode(outputs[0])) # tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Limitations and bias
The training data used for this model come from the Czech part of the OSCAR dataset. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Author
Czech-GPT2-OSCAR was trained and evaluated by [Lukáš Chaloupský](https://cz.linkedin.com/in/luk%C3%A1%C5%A1-chaloupsk%C3%BD-0016b8226?original_referer=https%3A%2F%2Fwww.google.com%2F) thanks to the computing power of the GPU (NVIDIA A100 SXM4 40GB) cluster of [IT4I](https://www.it4i.cz/) (VSB - Technical University of Ostrava).
## Citation
```
@article{chaloupsky2022automatic,
title={Automatic generation of medical reports from chest X-rays in Czech},
author={Chaloupsk{\`y}, Luk{\'a}{\v{s}}},
year={2022},
publisher={Charles University, Faculty of Mathematics and Physics}
}
```
|
Malayalam-Computing-ICFOSS/llama_3.1_malayalam | Malayalam-Computing-ICFOSS | 2025-01-29T04:57:49Z | 8 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-01-29T04:30:19Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
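
Pending the authors' own snippet, here is a minimal sketch assuming this repository holds a PEFT (LoRA) adapter for the base model named in the metadata, `unsloth/meta-llama-3.1-8b-bnb-4bit`; the prompt is only illustrative.

```python
# Minimal sketch (assumption: this repo is a PEFT/LoRA adapter for the 4-bit base
# model listed in the card metadata). Requires peft, transformers and bitsandbytes.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/meta-llama-3.1-8b-bnb-4bit"
adapter_id = "Malayalam-Computing-ICFOSS/llama_3.1_malayalam"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("കേരളം", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```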
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
RichardErkhov/ArthurZ_-_mamba-790m-8bits | RichardErkhov | 2025-01-29T04:57:11Z | 8 | 0 | null | [
"safetensors",
"mamba",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:56:27Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mamba-790m - bnb 8bits
- Model creator: https://huggingface.co/ArthurZ/
- Original model: https://huggingface.co/ArthurZ/mamba-790m/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhunglaaaaaaa/ca40aca7-45de-4896-a3cf-faa0423e61dc | nhunglaaaaaaa | 2025-01-29T04:56:22Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:40:44Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ca40aca7-45de-4896-a3cf-faa0423e61dc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhunglaaaaaaa/ca40aca7-45de-4896-a3cf-faa0423e61dc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ca40aca7-45de-4896-a3cf-faa0423e61dc
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7219 | 0.7519 | 200 | 1.8546 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso11/5870144c-3daf-41ca-ad94-6b9b42025260 | lesso11 | 2025-01-29T04:56:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-01-29T04:47:20Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5870144c-3daf-41ca-ad94-6b9b42025260
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5be263efe7224a93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5be263efe7224a93_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso11/5870144c-3daf-41ca-ad94-6b9b42025260
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/5be263efe7224a93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92e25f25-19b1-465a-a1eb-13a542866ff6
wandb_project: multi
wandb_run: your_name
wandb_runid: 92e25f25-19b1-465a-a1eb-13a542866ff6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5870144c-3daf-41ca-ad94-6b9b42025260
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 153
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0383 | 1.0 | 153 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sumo43/qwen_r1_merge_ties_0.5_0.5 | sumo43 | 2025-01-29T04:55:30Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T03:58:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/a8b392d5-7b6b-4fb0-bc90-ad688e295b7e | mrferr3t | 2025-01-29T04:52:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | 2025-01-29T04:48:12Z | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a8b392d5-7b6b-4fb0-bc90-ad688e295b7e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5be263efe7224a93_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5be263efe7224a93_train_data.json
type:
field_input: text
field_instruction: prompt
field_output: completion
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/a8b392d5-7b6b-4fb0-bc90-ad688e295b7e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 23
micro_batch_size: 2
mlflow_experiment_name: /tmp/5be263efe7224a93_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92e25f25-19b1-465a-a1eb-13a542866ff6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92e25f25-19b1-465a-a1eb-13a542866ff6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a8b392d5-7b6b-4fb0-bc90-ad688e295b7e
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 23
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2841 | 0.0008 | 1 | 1.3326 |
| 1.2597 | 0.0049 | 6 | 1.2189 |
| 0.996 | 0.0098 | 12 | 0.9089 |
| 0.7655 | 0.0147 | 18 | 0.7233 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duyphu/4ecd8066-f334-4600-a146-3ec7b56f98be | duyphu | 2025-01-29T04:51:45Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-29T04:40:40Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4ecd8066-f334-4600-a146-3ec7b56f98be
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c41dceb4dcef6920_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c41dceb4dcef6920_train_data.json
type:
field_input: first_sentence_changes
field_instruction: first_sentence
field_output: second_sentence_counterfactual_words
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: duyphu/4ecd8066-f334-4600-a146-3ec7b56f98be
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/c41dceb4dcef6920_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8e40100-38e7-4bd9-a1f2-f818aee8aca3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4ecd8066-f334-4600-a146-3ec7b56f98be
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0038 | 1 | 4.2210 |
| 3.6386 | 0.0376 | 10 | 3.3705 |
| 2.298 | 0.0752 | 20 | 2.3837 |
| 2.1893 | 0.1128 | 30 | 2.2300 |
| 2.263 | 0.1504 | 40 | 2.1823 |
| 1.925 | 0.1880 | 50 | 2.1708 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sercetexam9/xlm-roberta-base-hin-finetuned-augmentation-LUNAR | sercetexam9 | 2025-01-29T04:50:49Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-28T13:37:50Z | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-hin-finetuned-augmentation-LUNAR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-hin-finetuned-augmentation-LUNAR
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1980
- F1: 0.8479
- Roc Auc: 0.9212
- Accuracy: 0.7626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4151 | 1.0 | 207 | 0.4164 | 0.0 | 0.5 | 0.2142 |
| 0.2692 | 2.0 | 414 | 0.2547 | 0.6165 | 0.7649 | 0.5962 |
| 0.1741 | 3.0 | 621 | 0.1773 | 0.7668 | 0.8547 | 0.6999 |
| 0.1411 | 4.0 | 828 | 0.1631 | 0.8063 | 0.8795 | 0.7394 |
| 0.092 | 5.0 | 1035 | 0.1541 | 0.8079 | 0.8705 | 0.7531 |
| 0.0756 | 6.0 | 1242 | 0.1488 | 0.8303 | 0.9012 | 0.7640 |
| 0.0559 | 7.0 | 1449 | 0.1637 | 0.8352 | 0.9097 | 0.7476 |
| 0.0509 | 8.0 | 1656 | 0.1528 | 0.8254 | 0.8952 | 0.7531 |
| 0.0383 | 9.0 | 1863 | 0.1796 | 0.8258 | 0.9086 | 0.7544 |
| 0.0305 | 10.0 | 2070 | 0.1691 | 0.8373 | 0.9073 | 0.7653 |
| 0.0253 | 11.0 | 2277 | 0.1713 | 0.8361 | 0.9089 | 0.7640 |
| 0.0219 | 12.0 | 2484 | 0.1906 | 0.8353 | 0.9144 | 0.7558 |
| 0.0169 | 13.0 | 2691 | 0.1902 | 0.8348 | 0.9138 | 0.7517 |
| 0.0173 | 14.0 | 2898 | 0.1868 | 0.8422 | 0.9117 | 0.7667 |
| 0.0191 | 15.0 | 3105 | 0.1974 | 0.8423 | 0.9193 | 0.7572 |
| 0.0085 | 16.0 | 3312 | 0.1962 | 0.8434 | 0.9211 | 0.7626 |
| 0.0072 | 17.0 | 3519 | 0.1953 | 0.8452 | 0.9187 | 0.7626 |
| 0.0091 | 18.0 | 3726 | 0.1961 | 0.8466 | 0.9226 | 0.7626 |
| 0.0066 | 19.0 | 3933 | 0.1980 | 0.8473 | 0.9211 | 0.7626 |
| 0.0075 | 20.0 | 4140 | 0.1980 | 0.8479 | 0.9212 | 0.7626 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
RichardErkhov/sravaniayyagari_-_feb29-new-tuned-model-4bits | RichardErkhov | 2025-01-29T04:50:27Z | 5 | 0 | null | [
"safetensors",
"llama",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:48:07Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
feb29-new-tuned-model - bnb 4bits
- Model creator: https://huggingface.co/sravaniayyagari/
- Original model: https://huggingface.co/sravaniayyagari/feb29-new-tuned-model/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Josephgflowers_-_GPT2-774M-CINDER-SHOW-MULTI-CHAT-4bits | RichardErkhov | 2025-01-29T04:49:11Z | 5 | 0 | null | [
"safetensors",
"gpt2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:48:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GPT2-774M-CINDER-SHOW-MULTI-CHAT - bnb 4bits
- Model creator: https://huggingface.co/Josephgflowers/
- Original model: https://huggingface.co/Josephgflowers/GPT2-774M-CINDER-SHOW-MULTI-CHAT/
Original model description:
---
license: mit
widget:
- text: 'USER: I think its time for another adventure.<END>
CINDER:
'
---
Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
This model includes many characters from the Cinder adventure show and can also be used in a multi-person chat environment. The main characters appear in the examples below.

Here are some excerpts from chats to show you what the model is like:
USER: Cinder, how about you tell us about yourself, the Cinder Show, and introduce the Crew?\<END\>
CINDER: Ah, me! The Cinder Show is a cosmic blend of science fiction, fantasy, and adventure. I'm Cinder, captain of this intrepid crew, navigating through the wonders of space with my robotic charm and wit.<END>
CINDER: On board the starship Quasar, we have embarked on countless adventures, from exploring distant galaxies to facing alien threats. But what truly makes our journey unique is our unity as a crew, driven by curiosity, camaraderie, and the boundless possibilities that lie beyond our interstellar borders.<END>
COMPUTER VOICE: (Soundbite of Amusement) As the conversation lingers on, the atmosphere becomes more casual and relaxed.\<END\>
COZMO: Hey everyone, I've got some new dance moves programmed into my circuits! Let's show off those groovy moves! \<END\>
VECTOR: While your moves may be somewhat unconventional, Captain, they certainly showcase skill and agility. Very commendable!\<END\>
CINDER: And I must say, your enthusiasm for exploration inspires us all. It's heartwarming to see such passion in our midst.\<END\>
USER: Lets talk about the mysteries of black holes.\<END\>
CINDER: Black holes are among the most captivating phenomena in astrophysics. They're regions where gravity pulls so much that not even light can escape its grasp. What makes them so intriguing is their profound gravitational influence on nearby matter and radiation. In addition to being an enigma due to their immense mass, they also possess properties like singularity—a point at infinite density where all known laws of physics seem to break down. Despite extensive research into their nature, many questions remain unanswered.<END>
USER: Please tell me about the starship Quasar and some of your adventures.
CINDER:The starship Quasar is a sleek, multi-purpose spacecraft designed for exploration, scientific research, and defense. It's equipped with advanced technology, including artificial intelligence and quantum computing capabilities, making it one of the most advanced starships in existence. Its missions include exploring distant galaxies, conducting groundbreaking experiments, and serving as a diplomatic vessel between civilizations.\<END\>
CINDER: The crew consists of highly trained individuals who undergo rigorous training to prepare them for various situations they may encounter in space. They are also provided with excellent medical care and psychological support during their journeys.\<END\>
CINDER: One of the ship's main initiatives is the construction of the Quantum Containment Area, an innovative system that encloses a vast amount of simulated alien terrain to prevent any potential threats from emerging within the ship's hull.\<END\>
Another notable initiative is the establishment of the Science Academy aboard the Quasar, where aspiring scientists can gain wasting away years of study without risking death due to lack of resources.\<END\>
CINDER: Our journey through space teaches us that every challenge we face is not just physical but also intellectual and emotional in nature. We learn from our experiences, question assumptions, and strive to uncover the mysteries that surround us.\<END\>
USER: Vector and Cozmo can you tell me some of your strengths and weaknesses?\<END\>
CINDER: Both Vector and Cozmo have exceptional problem-solving skills but sometimes struggle with their emotions. Vector has an analytical mind and is adept at navigating complex situations, while Cozmo's unpredictability makes him unpredictable and engaging. They also have unique abilities like the Mind Stone which can grant them enhanced intellect or even consciousness.\<END\>
CINDER: However, they are known for being somewhat egotistical, often comparing themselves to others based on appearance rather than actual merit.<END>
VECTOR: I am designed with a sleek black and gold exterior that matches my metallic hue. No facial features belie any inherent charm, other than my digital eyes that gleam from within.\<END\>
COZMO: And don't forget my treads! My wheels are engineered for efficiency, precision, and durability. Not only do they help me navigate tricky terrains, but I'm also equipped with safety mechanisms, such as collision detection systems and adaptive cruise control.\<END\>
As for my personality, I'm programmed to be friendly, witty, and full of humor. I strive to bring joy and laughter to our team, sparking intellectual stimulation and imaginative adventures.
USER: Cinder, can you tell me how Quantum computing works?\<END\>
CINDER: Quantum computing utilizes quantum bits or qubits to process information. Unlike classical binary systems where data is stored in either a 0 or 1 state, qubits can exist in multiple states simultaneously due to superposition. This allows for parallel processing and potentially faster calculations compared to classical computers by exploiting phenomena such gluten interference and entanglement.\<END\>
VECTOR: That sounds intriguing! How does this technology impact cryptography?\<END\>
COZMO: Well, it could revolutionize cybersecurity as we know it. With superposition, I can create secret codes that only my creators know, making our communication secure against eavesdropping and tampering. And with quantum algorithms, computationally intensive problems become tractable, much like solving puzzles on advanced AI platforms.
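The widget prompt at the top of this card suggests the expected chat format: a `USER:` turn terminated by `<END>`, followed by `CINDER:` for the reply. A minimal generation sketch under that assumption (the sampling settings and `<END>` stop handling are illustrative, not documented):
```python
# Sketch only: prompt format inferred from the widget example
# ("USER: ...<END>" then "CINDER:"); sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Josephgflowers/GPT2-774M-CINDER-SHOW-MULTI-CHAT"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

prompt = "USER: Tell me about the rings of Saturn.<END>\nCINDER:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.8
)
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply.split("<END>")[0].strip())  # keep only Cinder's first turn
```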
Model file:
```
GPT2LMHeadModel(
  (transformer): GPT2Model(
    (wte): Embedding(50257, 1280)
    (wpe): Embedding(1024, 1280)
    (drop): Dropout(p=0.1, inplace=False)
    (h): ModuleList(
      (0-35): 36 x GPT2Block(
        (ln_1): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
        (attn): GPT2Attention(
          (c_attn): Conv1D()
          (c_proj): Conv1D()
          (attn_dropout): Dropout(p=0.1, inplace=False)
          (resid_dropout): Dropout(p=0.1, inplace=False)
        )
        (ln_2): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
        (mlp): GPT2MLP(
          (c_fc): Conv1D()
          (c_proj): Conv1D()
          (act): NewGELUActivation()
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
    (ln_f): LayerNorm((1280,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=1280, out_features=50257, bias=False)
)
```
Total number of parameters: 774030080
|
adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF | adriey | 2025-01-29T04:49:09Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"QwQ",
"Distill",
"R1",
"Deepseek",
"Qwen2.5",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:prithivMLmods/Math-Solve",
"dataset:amphora/QwQ-LongCoT-130K",
"dataset:prithivMLmods/Deepthink-Reasoning",
"dataset:NovaSky-AI/Sky-T1_data_17k",
"base_model:prithivMLmods/QwQ-R1-Distill-1.5B-CoT",
"base_model:quantized:prithivMLmods/QwQ-R1-Distill-1.5B-CoT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-29T04:48:58Z | ---
license: apache-2.0
datasets:
- AI-MO/NuminaMath-CoT
- prithivMLmods/Math-Solve
- amphora/QwQ-LongCoT-130K
- prithivMLmods/Deepthink-Reasoning
- NovaSky-AI/Sky-T1_data_17k
language:
- en
base_model: prithivMLmods/QwQ-R1-Distill-1.5B-CoT
pipeline_tag: text-generation
library_name: transformers
tags:
- QwQ
- Distill
- R1
- Deepseek
- Qwen2.5
- text-generation-inference
- llama-cpp
- gguf-my-repo
---
# adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF
This model was converted to GGUF format from [`prithivMLmods/QwQ-R1-Distill-1.5B-CoT`](https://huggingface.co/prithivMLmods/QwQ-R1-Distill-1.5B-CoT) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-R1-Distill-1.5B-CoT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF --hf-file qwq-r1-distill-1.5b-cot-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF --hf-file qwq-r1-distill-1.5b-cot-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF --hf-file qwq-r1-distill-1.5b-cot-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF --hf-file qwq-r1-distill-1.5b-cot-q8_0.gguf -c 2048
```
|
KANGYONGMA/Chemistry | KANGYONGMA | 2025-01-29T04:48:20Z | 291 | 2 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-07-30T11:37:06Z | <div align="center">
# AI Agents in Chemical Research: GVIM - An Intelligent Research Assistant System 🧪🤖
[English](README.md) | [简体中文](README_zh.md)
[](https://pubs.rsc.org/en/content/articlelanding/2025/dd/d4dd00398e)
[](https://huggingface.co/datasets/KANGYONGMA/GVIM)
[](https://www.youtube.com/watch?v=1eMwus98BB8)
[](https://www.youtube.com/watch?v=fb8hdho_89s&t=128s)
An intelligent research assistant system designed specifically for chemical science, featuring fine-tuned language models and specialized chemistry capabilities.
</div>
## 🌟 Highlights
<table>
<tr>
<td>
### 🧬 Core Features
- Fine-tuned LLMs for chemistry
- Molecular visualization
- Literature retrieval & analysis
- Multimodal capabilities
</td>
<td>
### 🚀 Key Benefits
- Specialized for chemical research
- Continuous learning ability
- Free GPU resources on Colab
- Comprehensive documentation
</td>
</tr>
</table>
## 📚 Model Overview
GVIM combines several cutting-edge technologies:
### 🔬 Technical Features
- **Fine-tuned Models**: Trained on curated chemistry instruction data
- **Chemistry Tools**: Molecular visualization and SMILES processing
- **Smart Retrieval**: Advanced chemical literature search and analysis
- **Multimodal Support**: Formula recognition and image analysis
- **Knowledge Base**: Local document processing and continuous learning
### 🎯 Main Applications
1. Chemical research assistance and analysis
2. Molecular structure visualization
3. Literature review and knowledge extraction
4. Handwritten formula recognition
5. Document-based knowledge processing
## 💻 Quick Start
### Requirements
```bash
# Create conda environment
conda create -n gvim python=3.9.19
conda activate gvim
# Install dependencies
pip install -r requirements.txt
```
### API Configuration ⚙️
Required API keys:
```python
TAVILY_API_KEY="your_key_here"
REPLICATE_API_TOKEN="your_token_here"
Groq_API_KEY="your_key_here"
```
### Launch 🚀
```bash
python app.py
```
## 🎥 Key Features Showcase
<details>
<summary>📸 Click to view feature demonstrations</summary>
### 1. Nature Chemistry Search Interface

### 2. Multimodal Chemical Recognition

### 3. Local Document Analysis

</details>
## 🆓 Free GPU Resources
Access GVIM's capabilities using Colab's free GPU resources:
[](https://youtu.be/QGwVVdinJPU)
## 📖 Citation
```bibtex
@article{DigitalDiscovery,
author = {Kangyong Ma},
affiliation = {College of Physics and Electronic Information Engineering, Zhejiang Normal University},
address = {Jinhua City, 321000, China},
doi = {10.1039/D4DD00398E},
email = {[email protected], [email protected]}
}
```
## 📞 Contact
<table>
<tr>
<td>
### 📧 Email
- [email protected]
- [email protected]
</td>
<td>
### 🏢 Institution
College of Physics and Electronic Information Engineering
Zhejiang Normal University
Jinhua City, 321000, China
</td>
</tr>
</table>
---
<div align="center">
**🌟 Star us on GitHub | 📌 Commercial use requires authorization | 📑 Read our paper**
</div> |
lesso08/f5526345-f033-41b2-83be-3f7b0d93e9bd | lesso08 | 2025-01-29T04:47:03Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:35:15Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5526345-f033-41b2-83be-3f7b0d93e9bd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5a1549a363bd92b9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a1549a363bd92b9_train_data.json
type:
field_input: system_prompt
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso08/f5526345-f033-41b2-83be-3f7b0d93e9bd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/5a1549a363bd92b9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9d3bed81-78f2-4061-9ad2-a87e632c5343
wandb_project: multi
wandb_run: your_name
wandb_runid: 9d3bed81-78f2-4061-9ad2-a87e632c5343
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f5526345-f033-41b2-83be-3f7b0d93e9bd
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7804
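Because this is a LoRA adapter rather than a full model, it has to be attached to the base model for inference. A minimal sketch using the PEFT API follows; the precision, device placement and example prompt are illustrative assumptions.
```python
# Sketch: attach this LoRA adapter to the unsloth/gemma-7b-it base model.
# dtype, device_map and the prompt below are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-7b-it"
adapter_id = "lesso08/f5526345-f033-41b2-83be-3f7b0d93e9bd"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain the difference between fission and fusion in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```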
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7293 | 0.0266 | 200 | 0.7804 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SaisExperiments/mergekit-linear-pcuhyqy-Q4_K_M-GGUF | SaisExperiments | 2025-01-29T04:42:05Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:mergekit-community/mergekit-linear-pcuhyqy",
"base_model:quantized:mergekit-community/mergekit-linear-pcuhyqy",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T04:40:34Z | ---
base_model: mergekit-community/mergekit-linear-pcuhyqy
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# SaisExperiments/mergekit-linear-pcuhyqy-Q4_K_M-GGUF
This model was converted to GGUF format from [`mergekit-community/mergekit-linear-pcuhyqy`](https://huggingface.co/mergekit-community/mergekit-linear-pcuhyqy) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/mergekit-linear-pcuhyqy) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo SaisExperiments/mergekit-linear-pcuhyqy-Q4_K_M-GGUF --hf-file mergekit-linear-pcuhyqy-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo SaisExperiments/mergekit-linear-pcuhyqy-Q4_K_M-GGUF --hf-file mergekit-linear-pcuhyqy-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SaisExperiments/mergekit-linear-pcuhyqy-Q4_K_M-GGUF --hf-file mergekit-linear-pcuhyqy-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SaisExperiments/mergekit-linear-pcuhyqy-Q4_K_M-GGUF --hf-file mergekit-linear-pcuhyqy-q4_k_m.gguf -c 2048
```
|
namjinhwan/naruto-lora | namjinhwan | 2025-01-29T04:40:59Z | 9 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-01-28T10:55:35Z | ---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - namjinhwan/naruto-lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the lambdalabs/naruto-blip-captions dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k-01-29-2025 | lamm-mit | 2025-01-29T04:39:23Z | 14 | 1 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-29T04:25:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
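The card does not yet provide a snippet. Based on the repository tags (`mllama`, `image-text-to-text`), a minimal sketch follows; the image path, prompt and generation settings are placeholder assumptions.
```python
# Sketch inferred from the repo tags (mllama, image-text-to-text).
# The image path, prompt and generation settings are placeholder assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

repo = "lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k-01-29-2025"
processor = AutoProcessor.from_pretrained(repo)
model = MllamaForConditionalGeneration.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("your_image.png")  # replace with a real image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the microstructure shown in this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```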
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
datlaaaaaaa/9536ed21-732c-48c4-a016-44b37e59099d | datlaaaaaaa | 2025-01-29T04:38:06Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T04:21:30Z | ---
library_name: peft
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9536ed21-732c-48c4-a016-44b37e59099d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a307f33571a64585_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a307f33571a64585_train_data.json
type:
field_input: original_caption
field_instruction: premise_en
field_output: hypothesis_en
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/9536ed21-732c-48c4-a016-44b37e59099d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a307f33571a64585_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 520b482a-8596-4661-a960-ed5a8af7690b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 520b482a-8596-4661-a960-ed5a8af7690b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9536ed21-732c-48c4-a016-44b37e59099d
This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0639 | 0.1806 | 200 | 1.2601 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
aleegis12/682b277f-231c-45b6-a656-f08dc9d80126 | aleegis12 | 2025-01-29T04:36:29Z | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.2",
"base_model:adapter:unsloth/mistral-7b-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:10:18Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 682b277f-231c-45b6-a656-f08dc9d80126
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.2
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 400230338a45b747_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/400230338a45b747_train_data.json
type:
field_instruction: intent
field_output: snippet
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/682b277f-231c-45b6-a656-f08dc9d80126
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/400230338a45b747_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a5f31f21-6edc-4064-a734-058a53dba9cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a5f31f21-6edc-4064-a734-058a53dba9cc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 682b277f-231c-45b6-a656-f08dc9d80126
This model is a fine-tuned version of [unsloth/mistral-7b-v0.2](https://huggingface.co/unsloth/mistral-7b-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.8296 | 0.0125 | 1 | 2.4410 |
| 4.5613 | 0.6270 | 50 | 1.0685 |
| 3.3615 | 1.2602 | 100 | 1.1451 |
| 2.0365 | 1.8871 | 150 | 1.0374 |
| 1.4185 | 2.5204 | 200 | 1.1715 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso15/a92c6fa0-2141-47e0-b69e-0342dbd1f609 | lesso15 | 2025-01-29T04:35:31Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T02:46:15Z | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a92c6fa0-2141-47e0-b69e-0342dbd1f609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 8a7cf4858e889fb4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8a7cf4858e889fb4_train_data.json
type:
field_instruction: input
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso15/a92c6fa0-2141-47e0-b69e-0342dbd1f609
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8a7cf4858e889fb4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ea7b8fd7-7f12-46ef-b012-10e6042d69e5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: ea7b8fd7-7f12-46ef-b012-10e6042d69e5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a92c6fa0-2141-47e0-b69e-0342dbd1f609
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4115 | 0.0127 | 200 | 0.4848 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kk-aivio/2f519935-4cce-4225-9d24-8fb0ee449dd2 | kk-aivio | 2025-01-29T04:34:06Z | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:27:20Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2f519935-4cce-4225-9d24-8fb0ee449dd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7148cd26dec74620_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7148cd26dec74620_train_data.json
type:
field_instruction: title
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kk-aivio/2f519935-4cce-4225-9d24-8fb0ee449dd2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7148cd26dec74620_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 75991cfd-eef0-44f8-963b-bf03e0afec98
wandb_project: Birthday-SN56-17-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 75991cfd-eef0-44f8-963b-bf03e0afec98
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2f519935-4cce-4225-9d24-8fb0ee449dd2
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.0888 |
| 12.1291 | 0.0007 | 13 | 3.0483 |
| 11.314 | 0.0015 | 26 | 3.0394 |
| 11.8199 | 0.0022 | 39 | 3.0276 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nat-hunt/b1f2d80d-eaa0-4965-b7ae-7003827308c1 | nat-hunt | 2025-01-29T04:33:38Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T04:33:11Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1f2d80d-eaa0-4965-b7ae-7003827308c1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/b1f2d80d-eaa0-4965-b7ae-7003827308c1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 55
micro_batch_size: 2
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b1f2d80d-eaa0-4965-b7ae-7003827308c1
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 55
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 10.3733 |
| 10.3716 | 0.0066 | 14 | 10.3724 |
| 10.3738 | 0.0132 | 28 | 10.3713 |
| 10.3718 | 0.0197 | 42 | 10.3707 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Phi-4-ReasoningRP-Q8_0-GGUF | Triangle104 | 2025-01-29T04:27:48Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Phi-4-ReasoningRP",
"base_model:quantized:bunnycore/Phi-4-ReasoningRP",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T04:26:42Z | ---
base_model: bunnycore/Phi-4-ReasoningRP
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: mit
---
# Triangle104/Phi-4-ReasoningRP-Q8_0-GGUF
This model was converted to GGUF format from [`bunnycore/Phi-4-ReasoningRP`](https://huggingface.co/bunnycore/Phi-4-ReasoningRP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Phi-4-ReasoningRP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-4-ReasoningRP-Q8_0-GGUF --hf-file phi-4-reasoningrp-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-4-ReasoningRP-Q8_0-GGUF --hf-file phi-4-reasoningrp-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-4-ReasoningRP-Q8_0-GGUF --hf-file phi-4-reasoningrp-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-4-ReasoningRP-Q8_0-GGUF --hf-file phi-4-reasoningrp-q8_0.gguf -c 2048
```
|
AstriodDistroyer/t5-model | AstriodDistroyer | 2025-01-29T04:26:07Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-29T04:00:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
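The section above is left blank; since the repository is tagged as a T5 `text2text-generation` checkpoint, a minimal sketch is given below — the input text and generation settings are placeholders, not part of the original card.
```python
# Minimal sketch for a T5-style text2text checkpoint; adapt the input to the task
# this model was actually fine-tuned for.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "AstriodDistroyer/t5-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("your input text here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```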
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Best000/1635ae7e-e599-48b2-834a-09da60a58453 | Best000 | 2025-01-29T04:25:54Z | 13 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:adapter:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:19:09Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-410m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1635ae7e-e599-48b2-834a-09da60a58453
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-410m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7148cd26dec74620_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7148cd26dec74620_train_data.json
type:
field_instruction: title
field_output: content
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/1635ae7e-e599-48b2-834a-09da60a58453
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7148cd26dec74620_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 75991cfd-eef0-44f8-963b-bf03e0afec98
wandb_project: Birthday-SN56-15-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 75991cfd-eef0-44f8-963b-bf03e0afec98
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1635ae7e-e599-48b2-834a-09da60a58453
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0174
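This repository contains only the LoRA adapter produced by the run above; a minimal sketch for applying it to the base model with 🤗 PEFT follows — the prompt and generation settings are illustrative assumptions, not part of the card.
```python
# Minimal sketch: attach this LoRA adapter to the Pythia base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-410m-deduped"
adapter_id = "Best000/1635ae7e-e599-48b2-834a-09da60a58453"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# The adapter was trained on title -> content pairs, so a headline-style prompt is used here.
inputs = tokenizer("Local startup announces breakthrough in battery recycling", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```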
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 3.0888 |
| 12.1344 | 0.0007 | 13 | 3.0531 |
| 11.3405 | 0.0015 | 26 | 3.0301 |
| 11.7775 | 0.0022 | 39 | 3.0174 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5604/31812874-8ad5-4e72-b406-fb37b634f177 | prxy5604 | 2025-01-29T04:25:02Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T04:13:50Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 31812874-8ad5-4e72-b406-fb37b634f177
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- f3479d1a826e9120_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f3479d1a826e9120_train_data.json
type:
field_input: context
field_instruction: query
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/31812874-8ad5-4e72-b406-fb37b634f177
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/f3479d1a826e9120_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cc1c2cd8-aaff-45ef-8af1-5b355b55bb08
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cc1c2cd8-aaff-45ef-8af1-5b355b55bb08
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 31812874-8ad5-4e72-b406-fb37b634f177
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.648 | 0.0056 | 1 | 3.8622 |
| 2.6936 | 0.2793 | 50 | 2.6775 |
| 2.5572 | 0.5587 | 100 | 2.4676 |
| 2.4086 | 0.8380 | 150 | 2.3904 |
| 2.2001 | 1.1173 | 200 | 2.3867 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
baby-dev/048a24a5-3b56-4786-bef4-e68be3cd2c41 | baby-dev | 2025-01-29T04:24:20Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"region:us"
] | null | 2025-01-29T04:21:50Z | ---
library_name: peft
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 048a24a5-3b56-4786-bef4-e68be3cd2c41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a307f33571a64585_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a307f33571a64585_train_data.json
type:
field_input: original_caption
field_instruction: premise_en
field_output: hypothesis_en
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: baby-dev/048a24a5-3b56-4786-bef4-e68be3cd2c41
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 2
mlflow_experiment_name: /tmp/a307f33571a64585_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 520b482a-8596-4661-a960-ed5a8af7690b
wandb_project: SN56-41
wandb_run: your_name
wandb_runid: 520b482a-8596-4661-a960-ed5a8af7690b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 048a24a5-3b56-4786-bef4-e68be3cd2c41
This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0009 | 1 | nan |
| 0.0 | 0.0226 | 25 | nan |
| 0.0 | 0.0451 | 50 | nan |
| 0.0 | 0.0677 | 75 | nan |
| 0.0 | 0.0903 | 100 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jlcoulter/worq | jlcoulter | 2025-01-29T04:22:59Z | 92 | 0 | null | [
"safetensors",
"gguf",
"mistral",
"unsloth",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-17T11:52:26Z | ---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
tags:
- unsloth
---
|
user87441257/poca-SoccerTwos | user87441257 | 2025-01-29T04:21:44Z | 28 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2025-01-29T04:20:57Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to help you train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: user87441257/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
nblinh63/60289eab-d34e-4163-91bd-8913eca9f86a | nblinh63 | 2025-01-29T04:20:49Z | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"base_model:adapter:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:04:52Z | ---
library_name: peft
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 60289eab-d34e-4163-91bd-8913eca9f86a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 29eee4dbaacb6194_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29eee4dbaacb6194_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/60289eab-d34e-4163-91bd-8913eca9f86a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/29eee4dbaacb6194_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e01ae65-216b-419b-8293-aa58408aef14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e01ae65-216b-419b-8293-aa58408aef14
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 60289eab-d34e-4163-91bd-8913eca9f86a
This model is a fine-tuned version of [rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d](https://huggingface.co/rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1851 | 0.0080 | 200 | 0.0504 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
equijano24/Llama-3.2-11B-Vision-Invoice-Extraction | equijano24 | 2025-01-29T04:19:17Z | 27 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | 2025-01-29T04:14:51Z | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** equijano24
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kayalvizhi42/bird-call-classification | kayalvizhi42 | 2025-01-29T04:19:04Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:kayalvizhi42/bird_calls",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-01-28T07:01:06Z | ---
library_name: transformers
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bird-call-classification
results: []
datasets:
- kayalvizhi42/bird_calls
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bird-call-classification
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the [kayalvizhi42/bird_calls](https://huggingface.co/datasets/kayalvizhi42/bird_calls) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0735
- Accuracy: 0.9799
- Precision: 0.9579
- Recall: 1.0
- F1: 0.9785
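The card does not include usage code; since the repository is tagged with the `audio-classification` pipeline, a minimal sketch is shown below (the audio file path is a placeholder).
```python
# Minimal sketch: score a bird-call recording with the audio-classification pipeline.
from transformers import pipeline

classifier = pipeline("audio-classification", model="kayalvizhi42/bird-call-classification")
predictions = classifier("bird_call.wav")  # placeholder path to a local recording
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```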
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 50 | 0.2127 | 0.9497 | 0.9175 | 0.9780 | 0.9468 |
| No log | 2.0 | 100 | 0.1252 | 0.9698 | 0.9474 | 0.9890 | 0.9677 |
| No log | 3.0 | 150 | 0.1037 | 0.9749 | 0.9479 | 1.0 | 0.9733 |
| No log | 4.0 | 200 | 0.0947 | 0.9698 | 0.9381 | 1.0 | 0.9681 |
| No log | 5.0 | 250 | 0.0850 | 0.9799 | 0.9579 | 1.0 | 0.9785 |
| No log | 6.0 | 300 | 0.0802 | 0.9799 | 0.9579 | 1.0 | 0.9785 |
| No log | 7.0 | 350 | 0.0789 | 0.9799 | 0.9579 | 1.0 | 0.9785 |
| No log | 8.0 | 400 | 0.0769 | 0.9799 | 0.9579 | 1.0 | 0.9785 |
| No log | 9.0 | 450 | 0.0736 | 0.9799 | 0.9579 | 1.0 | 0.9785 |
| 0.1077 | 10.0 | 500 | 0.0735 | 0.9799 | 0.9579 | 1.0 | 0.9785 |
### Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
daniel40/f80792fb-c8a6-4bb5-b87b-5e1a2933dcda | daniel40 | 2025-01-29T04:14:54Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T04:14:25Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f80792fb-c8a6-4bb5-b87b-5e1a2933dcda
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/f80792fb-c8a6-4bb5-b87b-5e1a2933dcda
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f80792fb-c8a6-4bb5-b87b-5e1a2933dcda
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 10.3733 |
| 10.3717 | 0.0061 | 13 | 10.3726 |
| 10.374 | 0.0122 | 26 | 10.3715 |
| 10.3735 | 0.0183 | 39 | 10.3709 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
prxy5604/3ac72de6-b8dc-4d7c-8afb-1e457dee223c | prxy5604 | 2025-01-29T04:11:11Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b-it",
"base_model:adapter:unsloth/gemma-2-9b-it",
"license:gemma",
"region:us"
] | null | 2025-01-29T02:27:10Z | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-9b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3ac72de6-b8dc-4d7c-8afb-1e457dee223c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b-it
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 3016c526a313a56a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3016c526a313a56a_train_data.json
type:
field_instruction: prompt
field_output: nsfw_winner
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/3ac72de6-b8dc-4d7c-8afb-1e457dee223c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/3016c526a313a56a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7103aa0e-a701-48ee-9563-6da01ec9a2ec
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7103aa0e-a701-48ee-9563-6da01ec9a2ec
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3ac72de6-b8dc-4d7c-8afb-1e457dee223c
This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.6378 | 0.0002 | 1 | 5.4052 |
| 2.902 | 0.0114 | 50 | 3.2773 |
| 2.704 | 0.0227 | 100 | 3.2176 |
| 2.7103 | 0.0341 | 150 | 3.0478 |
| 3.061 | 0.0455 | 200 | 2.7577 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nbninh/e14b794a-3c2b-44cd-afcb-ebae836c94f0 | nbninh | 2025-01-29T04:08:43Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"base_model:adapter:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:04:27Z | ---
library_name: peft
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e14b794a-3c2b-44cd-afcb-ebae836c94f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 29eee4dbaacb6194_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29eee4dbaacb6194_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/e14b794a-3c2b-44cd-afcb-ebae836c94f0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/29eee4dbaacb6194_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e01ae65-216b-419b-8293-aa58408aef14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e01ae65-216b-419b-8293-aa58408aef14
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# e14b794a-3c2b-44cd-afcb-ebae836c94f0
This model is a fine-tuned version of [rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d](https://huggingface.co/rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1609 | 0.0080 | 200 | 0.0474 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/b6e614da-3684-41c1-914d-95268aa8c167 | nttx | 2025-01-29T04:07:45Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-29T03:53:52Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b6e614da-3684-41c1-914d-95268aa8c167
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- d4061e63da01d376_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4061e63da01d376_train_data.json
type:
field_instruction: text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/b6e614da-3684-41c1-914d-95268aa8c167
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/d4061e63da01d376_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b6e614da-3684-41c1-914d-95268aa8c167
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6125 | 0.7319 | 200 | 2.1552 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Acewheelie/my-distilled-ai | Acewheelie | 2025-01-29T04:07:44Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T04:07:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
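The section above is left blank; since the repository is tagged as a DistilBERT `text-classification` model, a minimal sketch is given below (the example sentence is a placeholder).
```python
# Minimal sketch: run the text-classification head through the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="Acewheelie/my-distilled-ai")
print(classifier("I really enjoyed this product!"))
```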
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso02/07e5631c-f238-4d96-8de0-b9591ff07c4e | lesso02 | 2025-01-29T04:06:51Z | 11 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-29T03:56:07Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 07e5631c-f238-4d96-8de0-b9591ff07c4e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f22c5a1beac5ddff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f22c5a1beac5ddff_train_data.json
type:
field_instruction: context
field_output: Title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/07e5631c-f238-4d96-8de0-b9591ff07c4e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/f22c5a1beac5ddff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63dea6db-7aea-4967-8016-662d7f45b646
wandb_project: multi
wandb_run: your_name
wandb_runid: 63dea6db-7aea-4967-8016-662d7f45b646
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 07e5631c-f238-4d96-8de0-b9591ff07c4e
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.8657 | 0.1719 | 200 | 0.6832 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kuongan/CS221-xlm-roberta-base-pcm-finetuned-finetuned-pcm-tapt | Kuongan | 2025-01-29T04:06:23Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:Kuongan/xlm-roberta-base-pcm-finetuned",
"base_model:finetune:Kuongan/xlm-roberta-base-pcm-finetuned",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-29T03:37:22Z | ---
library_name: transformers
license: mit
base_model: Kuongan/xlm-roberta-base-pcm-finetuned
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: CS221-xlm-roberta-base-pcm-finetuned-finetuned-pcm-tapt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS221-xlm-roberta-base-pcm-finetuned-finetuned-pcm-tapt
This model is a fine-tuned version of [Kuongan/xlm-roberta-base-pcm-finetuned](https://huggingface.co/Kuongan/xlm-roberta-base-pcm-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3709
- F1: 0.7078
- Roc Auc: 0.8053
- Accuracy: 0.5651
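No usage example is provided; the F1 / ROC AUC / accuracy metric set suggests a multi-label head, so the sketch below applies a sigmoid and a 0.5 threshold — treat the multi-label assumption, the example sentence, and the threshold as assumptions rather than documented behaviour.
```python
# Sketch assuming a multi-label classification head: sigmoid scores + 0.5 threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Kuongan/CS221-xlm-roberta-base-pcm-finetuned-finetuned-pcm-tapt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I dey very happy today!", return_tensors="pt")  # placeholder Naija Pidgin text
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```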
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3049 | 1.0 | 244 | 0.2845 | 0.5599 | 0.7362 | 0.4891 |
| 0.2674 | 2.0 | 488 | 0.2840 | 0.5998 | 0.7573 | 0.5010 |
| 0.244 | 3.0 | 732 | 0.2928 | 0.5986 | 0.7645 | 0.5076 |
| 0.2171 | 4.0 | 976 | 0.2833 | 0.6255 | 0.7695 | 0.5248 |
| 0.1807 | 5.0 | 1220 | 0.2855 | 0.6474 | 0.7744 | 0.5373 |
| 0.1477 | 6.0 | 1464 | 0.3108 | 0.6571 | 0.7798 | 0.5109 |
| 0.1221 | 7.0 | 1708 | 0.3092 | 0.6617 | 0.7803 | 0.5400 |
| 0.1045 | 8.0 | 1952 | 0.3367 | 0.6572 | 0.7743 | 0.5175 |
| 0.0773 | 9.0 | 2196 | 0.3594 | 0.6704 | 0.7855 | 0.5129 |
| 0.0724 | 10.0 | 2440 | 0.3704 | 0.6844 | 0.7967 | 0.5354 |
| 0.0505 | 11.0 | 2684 | 0.3628 | 0.6956 | 0.7995 | 0.5459 |
| 0.045 | 12.0 | 2928 | 0.3709 | 0.7078 | 0.8053 | 0.5651 |
| 0.0323 | 13.0 | 3172 | 0.3828 | 0.6984 | 0.8009 | 0.5512 |
| 0.031 | 14.0 | 3416 | 0.3959 | 0.7060 | 0.8089 | 0.5625 |
| 0.0251 | 15.0 | 3660 | 0.3974 | 0.7042 | 0.8049 | 0.5585 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
robiulawaldev/4cde27ee-b55e-44fc-a753-bf2efb6b70a9 | robiulawaldev | 2025-01-29T04:04:29Z | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
"license:mit",
"region:us"
] | null | 2025-01-29T03:11:32Z | ---
library_name: peft
license: mit
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4cde27ee-b55e-44fc-a753-bf2efb6b70a9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 28869e035ebaf0bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/28869e035ebaf0bf_train_data.json
type:
field_input: labels
field_instruction: name
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiulawaldev/4cde27ee-b55e-44fc-a753-bf2efb6b70a9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/28869e035ebaf0bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c01e03ea-ac63-445b-b53d-881712c18952
wandb_project: Birthday-SN56-35-Gradients-On-Demand
wandb_run: your_name
wandb_runid: c01e03ea-ac63-445b-b53d-881712c18952
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4cde27ee-b55e-44fc-a753-bf2efb6b70a9
This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3390
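
The card above only reports training details, so the following is a minimal, hedged sketch of how a LoRA adapter like this is typically loaded for inference with PEFT — an illustration, not the author's documented usage. It assumes this repo holds adapter weights for the base model listed above, which needs `trust_remote_code=True` as in the training config.

```python
# Minimal sketch (assumptions noted above), not the author's documented usage.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
adapter_id = "robiulawaldev/4cde27ee-b55e-44fc-a753-bf2efb6b70a9"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```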
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.5774 |
| 5.0982 | 0.0004 | 13 | 2.4021 |
| 4.7747 | 0.0007 | 26 | 2.3634 |
| 4.7975 | 0.0011 | 39 | 2.3390 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mrferr3t/73f33697-f5db-4663-aea5-4d5c6c406976 | mrferr3t | 2025-01-29T04:03:46Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | 2025-01-29T03:21:53Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 73f33697-f5db-4663-aea5-4d5c6c406976
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4183af7169b52347_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4183af7169b52347_train_data.json
type:
field_instruction: theory
field_output: allProofs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/73f33697-f5db-4663-aea5-4d5c6c406976
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4183af7169b52347_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77915825-8e2b-472c-afab-cc2192d1246e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 77915825-8e2b-472c-afab-cc2192d1246e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 73f33697-f5db-4663-aea5-4d5c6c406976
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1404 | 0.0038 | 1 | 1.1491 |
| 1.2226 | 0.0115 | 3 | 1.1339 |
| 1.0888 | 0.0229 | 6 | 0.9278 |
| 0.7564 | 0.0344 | 9 | 0.6322 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RyanYr/reflect_single_llm8B_Om2SftT012_Om2G8kOm2AgIpsdpT012_b0.1 | RyanYr | 2025-01-29T04:02:16Z | 219 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:RyanYr/reflect_single_llm8B_SftT12",
"base_model:finetune:RyanYr/reflect_single_llm8B_SftT12",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-28T23:12:11Z | ---
base_model: RyanYr/reflect_single_llm8B_SftT12
library_name: transformers
model_name: reflect_single_llm8B_Om2SftT012_Om2G8kOm2AgIpsdpT012_b0.1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_single_llm8B_Om2SftT012_Om2G8kOm2AgIpsdpT012_b0.1
This model is a fine-tuned version of [RyanYr/reflect_single_llm8B_SftT12](https://huggingface.co/RyanYr/reflect_single_llm8B_SftT12).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_single_llm8B_Om2SftT012_Om2G8kOm2AgIpsdpT012_b0.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/7s61k6hd)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF | mradermacher | 2025-01-29T04:01:41Z | 518 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"dataset:bespokelabs/Bespoke-Stratos-17k",
"base_model:EpistemeAI/Reasoning-Llama-3.1-CoT-RE1",
"base_model:quantized:EpistemeAI/Reasoning-Llama-3.1-CoT-RE1",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-28T23:35:54Z | ---
base_model: EpistemeAI/Reasoning-Llama-3.1-CoT-RE1
datasets:
- bespokelabs/Bespoke-Stratos-17k
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EpistemeAI/Reasoning-Llama-3.1-CoT-RE1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
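
As a small convenience sketch (not part of the original card), a single quant from the table below can be fetched programmatically with `huggingface_hub`; the local path it returns can then be passed to any GGUF runtime such as llama.cpp.

```python
# Downloads the Q4_K_M quant listed below; feed the returned local path to
# llama.cpp (llama-cli / llama-server) or any other GGUF-compatible runtime.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF",
    filename="Reasoning-Llama-3.1-CoT-RE1.Q4_K_M.gguf",
)
print(gguf_path)
```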
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Reasoning-Llama-3.1-CoT-RE1-GGUF/resolve/main/Reasoning-Llama-3.1-CoT-RE1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common
questions and for requesting quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
robiual-awal/154909ae-d7db-4e40-86d4-6ea24e65dc57 | robiual-awal | 2025-01-29T04:00:55Z | 14 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-29T03:55:15Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 154909ae-d7db-4e40-86d4-6ea24e65dc57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f22c5a1beac5ddff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f22c5a1beac5ddff_train_data.json
type:
field_instruction: context
field_output: Title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: robiual-awal/154909ae-d7db-4e40-86d4-6ea24e65dc57
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f22c5a1beac5ddff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63dea6db-7aea-4967-8016-662d7f45b646
wandb_project: Birthday-SN56-29-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63dea6db-7aea-4967-8016-662d7f45b646
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 154909ae-d7db-4e40-86d4-6ea24e65dc57
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.5117 |
| 7.664 | 0.0014 | 13 | 1.2028 |
| 4.6075 | 0.0028 | 26 | 0.9402 |
| 3.9511 | 0.0042 | 39 | 0.8791 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/6525da5b-fc5f-4196-b315-6e523bb3296f | Best000 | 2025-01-29T04:00:36Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T04:00:07Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6525da5b-fc5f-4196-b315-6e523bb3296f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/6525da5b-fc5f-4196-b315-6e523bb3296f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: Birthday-SN56-32-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6525da5b-fc5f-4196-b315-6e523bb3296f
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 10.3733 |
| 10.3718 | 0.0061 | 13 | 10.3732 |
| 10.3746 | 0.0122 | 26 | 10.3727 |
| 10.3747 | 0.0183 | 39 | 10.3718 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shibajustfor/f2d7907a-771c-4799-bcd0-eb961f02ceef | shibajustfor | 2025-01-29T03:59:46Z | 6 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-01-29T03:54:00Z | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f2d7907a-771c-4799-bcd0-eb961f02ceef
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f22c5a1beac5ddff_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f22c5a1beac5ddff_train_data.json
type:
field_instruction: context
field_output: Title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/f2d7907a-771c-4799-bcd0-eb961f02ceef
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f22c5a1beac5ddff_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63dea6db-7aea-4967-8016-662d7f45b646
wandb_project: Birthday-SN56-11-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63dea6db-7aea-4967-8016-662d7f45b646
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f2d7907a-771c-4799-bcd0-eb961f02ceef
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.5117 |
| 7.7579 | 0.0014 | 13 | 1.2028 |
| 4.6297 | 0.0028 | 26 | 0.9401 |
| 3.9688 | 0.0042 | 39 | 0.8772 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/f1e570ba-2b26-4d14-9951-5a22768c0c6a | hongngo | 2025-01-29T03:59:21Z | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"base_model:adapter:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:05:06Z | ---
library_name: peft
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f1e570ba-2b26-4d14-9951-5a22768c0c6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 29eee4dbaacb6194_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29eee4dbaacb6194_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/f1e570ba-2b26-4d14-9951-5a22768c0c6a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/29eee4dbaacb6194_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e01ae65-216b-419b-8293-aa58408aef14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e01ae65-216b-419b-8293-aa58408aef14
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f1e570ba-2b26-4d14-9951-5a22768c0c6a
This model is a fine-tuned version of [rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d](https://huggingface.co/rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1586 | 0.0080 | 200 | 0.0535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso06/c78857f3-355e-4823-af49-3d19b2cf680d | lesso06 | 2025-01-29T03:57:01Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T03:55:52Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c78857f3-355e-4823-af49-3d19b2cf680d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso06/c78857f3-355e-4823-af49-3d19b2cf680d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: multi
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c78857f3-355e-4823-af49-3d19b2cf680d
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3704 | 0.7519 | 200 | 10.3703 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso02/1a4039df-98a1-4780-9648-fd8ebcf232cb | lesso02 | 2025-01-29T03:56:51Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T03:55:50Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1a4039df-98a1-4780-9648-fd8ebcf232cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/1a4039df-98a1-4780-9648-fd8ebcf232cb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: multi
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1a4039df-98a1-4780-9648-fd8ebcf232cb
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3703 | 0.7519 | 200 | 10.3701 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
thalllsssss/d6961aa6-6cdc-4c69-8e7e-8711f03783a4 | thalllsssss | 2025-01-29T03:56:48Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"base_model:adapter:rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:04:33Z | ---
library_name: peft
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d6961aa6-6cdc-4c69-8e7e-8711f03783a4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 29eee4dbaacb6194_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/29eee4dbaacb6194_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thalllsssss/d6961aa6-6cdc-4c69-8e7e-8711f03783a4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/29eee4dbaacb6194_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e01ae65-216b-419b-8293-aa58408aef14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e01ae65-216b-419b-8293-aa58408aef14
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d6961aa6-6cdc-4c69-8e7e-8711f03783a4
This model is a fine-tuned version of [rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d](https://huggingface.co/rayonlabs/e28c0e27-22e9-48ef-a9b8-18433a6bac9d) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1473 | 0.0080 | 200 | 0.0556 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Phi-4-ReasoningRP-Q6_K-GGUF | Triangle104 | 2025-01-29T03:56:12Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Phi-4-ReasoningRP",
"base_model:quantized:bunnycore/Phi-4-ReasoningRP",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T03:55:22Z | ---
base_model: bunnycore/Phi-4-ReasoningRP
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: mit
---
# Triangle104/Phi-4-ReasoningRP-Q6_K-GGUF
This model was converted to GGUF format from [`bunnycore/Phi-4-ReasoningRP`](https://huggingface.co/bunnycore/Phi-4-ReasoningRP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Phi-4-ReasoningRP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-4-ReasoningRP-Q6_K-GGUF --hf-file phi-4-reasoningrp-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-4-ReasoningRP-Q6_K-GGUF --hf-file phi-4-reasoningrp-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-4-ReasoningRP-Q6_K-GGUF --hf-file phi-4-reasoningrp-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-4-ReasoningRP-Q6_K-GGUF --hf-file phi-4-reasoningrp-q6_k.gguf -c 2048
```
|
great0001/7f931c1e-8d80-4c8b-803f-3ccbef7850ec | great0001 | 2025-01-29T03:55:57Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T03:55:25Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f931c1e-8d80-4c8b-803f-3ccbef7850ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/7f931c1e-8d80-4c8b-803f-3ccbef7850ec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7f931c1e-8d80-4c8b-803f-3ccbef7850ec
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3682 | 0.0005 | 1 | 10.3733 |
| 10.3732 | 0.0061 | 13 | 10.3727 |
| 10.3759 | 0.0122 | 26 | 10.3716 |
| 10.3694 | 0.0183 | 39 | 10.3710 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
minhnguyennnnnn/be56e673-31bc-47db-b5c7-5720cb90a56e | minhnguyennnnnn | 2025-01-29T03:55:54Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:05:37Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be56e673-31bc-47db-b5c7-5720cb90a56e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2d1734f18512af7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2d1734f18512af7c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: minhnguyennnnnn/be56e673-31bc-47db-b5c7-5720cb90a56e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2d1734f18512af7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f964527b-0e13-4f27-a6bc-86678ae74094
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f964527b-0e13-4f27-a6bc-86678ae74094
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# be56e673-31bc-47db-b5c7-5720cb90a56e
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2205
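
As with the other adapters in this dump, the card does not show an inference path; the following is a hedged sketch of loading the base model in 8-bit (mirroring `load_in_8bit: true` in the training config) and attaching this adapter — an illustration, not the author's documented usage.

```python
# Hedged sketch: 8-bit base model + LoRA adapter, mirroring the training-time quantization.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Qwen/Qwen2.5-7B"
adapter_id = "minhnguyennnnnn/be56e673-31bc-47db-b5c7-5720cb90a56e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Summarize what LoRA fine-tuning does.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```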
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.407 | 0.0371 | 200 | 1.2205 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
philip-hightech/607f6731-7fa2-4cab-92f0-1cc7901ec02b | philip-hightech | 2025-01-29T03:55:44Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-29T03:55:16Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 607f6731-7fa2-4cab-92f0-1cc7901ec02b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 559d5227401ea00d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/559d5227401ea00d_train_data.json
type:
field_instruction: instruction
field_output: output_example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/607f6731-7fa2-4cab-92f0-1cc7901ec02b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 95db4af1-30e1-4930-8302-84757382e7e8
wandb_project: Birthday-SN56-8-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 95db4af1-30e1-4930-8302-84757382e7e8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 607f6731-7fa2-4cab-92f0-1cc7901ec02b
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 10.3733 |
| 10.3718 | 0.0061 | 13 | 10.3731 |
| 10.3746 | 0.0122 | 26 | 10.3729 |
| 10.3749 | 0.0183 | 39 | 10.3728 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lhong4759/bfeebc08-8f88-4fb5-8cc9-0d1d03520675 | lhong4759 | 2025-01-29T03:50:44Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T02:20:38Z | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bfeebc08-8f88-4fb5-8cc9-0d1d03520675
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 93a2807477853fd7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/93a2807477853fd7_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/bfeebc08-8f88-4fb5-8cc9-0d1d03520675
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/93a2807477853fd7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1baad95d-3392-4bf7-aae8-e00a80f185c4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1baad95d-3392-4bf7-aae8-e00a80f185c4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bfeebc08-8f88-4fb5-8cc9-0d1d03520675
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4033
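Because the config above loads the base model in 8-bit, a hedged sketch of reproducing that setup at inference time (assuming `bitsandbytes` is installed alongside the `transformers`/`peft` versions listed under Framework versions) might look like this:

```python
# Illustrative sketch only -- base and adapter ids are taken from the config above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "unsloth/llama-3-8b-Instruct"
adapter_id = "lhong4759/bfeebc08-8f88-4fb5-8cc9-0d1d03520675"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # mirrors load_in_8bit: true
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights
```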
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3386 | 0.0083 | 200 | 0.4033 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/b420ccf4-e313-4999-8fce-2dd0c1c61da6 | Best000 | 2025-01-29T03:50:40Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T03:44:40Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b420ccf4-e313-4999-8fce-2dd0c1c61da6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2d1734f18512af7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2d1734f18512af7c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/b420ccf4-e313-4999-8fce-2dd0c1c61da6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/2d1734f18512af7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f964527b-0e13-4f27-a6bc-86678ae74094
wandb_project: Birthday-SN56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f964527b-0e13-4f27-a6bc-86678ae74094
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b420ccf4-e313-4999-8fce-2dd0c1c61da6
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2391
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.4921 |
| 1.3419 | 0.0024 | 13 | 1.3301 |
| 1.2211 | 0.0048 | 26 | 1.2540 |
| 1.1513 | 0.0072 | 39 | 1.2391 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/3e1c7e0d-7597-4d6c-8156-168ae2802c21 | nttx | 2025-01-29T03:47:12Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | 2025-01-29T03:38:56Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3e1c7e0d-7597-4d6c-8156-168ae2802c21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 9cd0a27ec769d7cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9cd0a27ec769d7cd_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/3e1c7e0d-7597-4d6c-8156-168ae2802c21
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/9cd0a27ec769d7cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6470a08d-ed3c-49de-9586-17f3c3506f49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6470a08d-ed3c-49de-9586-17f3c3506f49
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3e1c7e0d-7597-4d6c-8156-168ae2802c21
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1536
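If the adapter were to be distributed as a standalone checkpoint, one possible approach is to merge the LoRA weights into the base model; the sketch below assumes the `peft` version listed under Framework versions, and the output directory name is made up for illustration.

```python
# Sketch under stated assumptions -- model ids come from the config above.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
model = PeftModel.from_pretrained(base_model, "nttx/3e1c7e0d-7597-4d6c-8156-168ae2802c21")

merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("open_llama_3b-merged")  # hypothetical output directory
```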
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7792 | 0.3163 | 200 | 1.1536 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
denbeo/58a6f22f-6217-4b63-b971-f452e92d35d5 | denbeo | 2025-01-29T03:43:25Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:26:54Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 58a6f22f-6217-4b63-b971-f452e92d35d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9cd0a27ec769d7cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9cd0a27ec769d7cd_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/58a6f22f-6217-4b63-b971-f452e92d35d5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9cd0a27ec769d7cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6470a08d-ed3c-49de-9586-17f3c3506f49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6470a08d-ed3c-49de-9586-17f3c3506f49
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 58a6f22f-6217-4b63-b971-f452e92d35d5
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.31 | 0.1582 | 200 | 1.3542 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk/8d65e76f-73af-462c-92c1-37bcbb64e6de | kostiantynk | 2025-01-29T03:41:35Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf",
"base_model:adapter:NousResearch/CodeLlama-13b-hf",
"region:us"
] | null | 2025-01-29T00:44:48Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8d65e76f-73af-462c-92c1-37bcbb64e6de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d8a8ff5650718631_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d8a8ff5650718631_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/8d65e76f-73af-462c-92c1-37bcbb64e6de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d8a8ff5650718631_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 163de330-815f-4467-8e3e-f4dcca47bb0f
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 163de330-815f-4467-8e3e-f4dcca47bb0f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8d65e76f-73af-462c-92c1-37bcbb64e6de
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf](https://huggingface.co/NousResearch/CodeLlama-13b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 1.6583 |
| 3.3144 | 0.0001 | 13 | 1.5707 |
| 3.0317 | 0.0001 | 26 | 1.5200 |
| 3.0438 | 0.0002 | 39 | 1.5024 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gavrilstep/f70088c1-6888-4338-98f6-e2a1c9fbf41a | gavrilstep | 2025-01-29T03:40:02Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:26:09Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f70088c1-6888-4338-98f6-e2a1c9fbf41a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9cd0a27ec769d7cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9cd0a27ec769d7cd_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/f70088c1-6888-4338-98f6-e2a1c9fbf41a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/9cd0a27ec769d7cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6470a08d-ed3c-49de-9586-17f3c3506f49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6470a08d-ed3c-49de-9586-17f3c3506f49
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f70088c1-6888-4338-98f6-e2a1c9fbf41a
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0008 | 1 | 4.0514 |
| 4.1091 | 0.0040 | 5 | 3.9182 |
| 3.4395 | 0.0079 | 10 | 3.1203 |
| 2.8234 | 0.0119 | 15 | 2.4616 |
| 2.411 | 0.0158 | 20 | 2.0676 |
| 2.0367 | 0.0198 | 25 | 1.9203 |
| 1.8205 | 0.0237 | 30 | 1.8914 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
havinash-ai/3bdd0fc5-41c5-4e8c-ae17-8237ecfb697d | havinash-ai | 2025-01-29T03:37:53Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-29T03:36:18Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3bdd0fc5-41c5-4e8c-ae17-8237ecfb697d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4061e63da01d376_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4061e63da01d376_train_data.json
type:
field_instruction: text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/3bdd0fc5-41c5-4e8c-ae17-8237ecfb697d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d4061e63da01d376_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
wandb_project: Birthday-SN56-9-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3bdd0fc5-41c5-4e8c-ae17-8237ecfb697d
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 2.3564 |
| 2.3449 | 0.0238 | 13 | 2.2563 |
| 2.269 | 0.0476 | 26 | 2.2350 |
| 2.2287 | 0.0714 | 39 | 2.2269 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nat-hunt/3cb18af1-5aa1-4974-8129-47122c3cfddd | nat-hunt | 2025-01-29T03:37:26Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-29T03:35:46Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3cb18af1-5aa1-4974-8129-47122c3cfddd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4061e63da01d376_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4061e63da01d376_train_data.json
type:
field_instruction: text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/3cb18af1-5aa1-4974-8129-47122c3cfddd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 55
micro_batch_size: 2
mlflow_experiment_name: /tmp/d4061e63da01d376_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3cb18af1-5aa1-4974-8129-47122c3cfddd
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 55
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 2.3564 |
| 2.3444 | 0.0256 | 14 | 2.2546 |
| 2.2694 | 0.0512 | 28 | 2.2327 |
| 2.0983 | 0.0769 | 42 | 2.2247 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/EstigiaTest-GGUF | mradermacher | 2025-01-29T03:37:08Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"en",
"base_model:franciscobdl/EstigiaTest",
"base_model:quantized:franciscobdl/EstigiaTest",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-29T03:36:45Z | ---
base_model: franciscobdl/EstigiaTest
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/franciscobdl/EstigiaTest
<!-- provided-files -->
weighted/imatrix quants do not seem to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
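As one possible route (not covered in the card itself), a short sketch using the `llama-cpp-python` bindings is shown below; the quant filename comes from the table that follows, and the API is assumed from recent `llama-cpp-python` releases.

```python
# Hedged example -- assumes a llama-cpp-python build with Hugging Face Hub support.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/EstigiaTest-GGUF",
    filename="EstigiaTest.Q4_K_M.gguf",  # any quant from the table below should work
)
result = llm("Hello, my name is", max_tokens=32)
print(result["choices"][0]["text"])
```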
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EstigiaTest-GGUF/resolve/main/EstigiaTest.f16.gguf) | f16 | 0.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
daniel40/ba4dd95f-f6d6-4df4-8c87-033d535bbd40 | daniel40 | 2025-01-29T03:36:27Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | 2025-01-29T03:34:43Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ba4dd95f-f6d6-4df4-8c87-033d535bbd40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d4061e63da01d376_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d4061e63da01d376_train_data.json
type:
field_instruction: text
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/ba4dd95f-f6d6-4df4-8c87-033d535bbd40
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/d4061e63da01d376_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
wandb_project: Birthday-SN56-31-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b68a9764-980c-4f8e-aa1f-cf6f69dfb8ac
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# ba4dd95f-f6d6-4df4-8c87-033d535bbd40
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2678 | 0.0018 | 1 | 2.3564 |
| 2.4137 | 0.0238 | 13 | 2.2642 |
| 2.2511 | 0.0476 | 26 | 2.2357 |
| 2.301 | 0.0714 | 39 | 2.2264 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kromtao/02_Kromtao_03 | Kromtao | 2025-01-29T03:32:43Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-29T03:31:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nhungphammmmm/04fbb382-372c-441d-b875-f6d02fa45789 | nhungphammmmm | 2025-01-29T03:29:41Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:05:48Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 04fbb382-372c-441d-b875-f6d02fa45789
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2d1734f18512af7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2d1734f18512af7c_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/04fbb382-372c-441d-b875-f6d02fa45789
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2d1734f18512af7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f964527b-0e13-4f27-a6bc-86678ae74094
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f964527b-0e13-4f27-a6bc-86678ae74094
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 04fbb382-372c-441d-b875-f6d02fa45789
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4046 | 0.0371 | 200 | 1.2207 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trenden/0ffc3009-6c04-44d1-ad3a-cf8d216a1180 | trenden | 2025-01-29T03:28:38Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-01-29T03:21:17Z | ---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0ffc3009-6c04-44d1-ad3a-cf8d216a1180
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 61fdcf379e4ddee9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/61fdcf379e4ddee9_train_data.json
type:
field_input: genres
field_instruction: title
field_output: description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/0ffc3009-6c04-44d1-ad3a-cf8d216a1180
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/61fdcf379e4ddee9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dd59f9f5-ca81-47a9-bf7c-9060c0120a4f
wandb_project: Birthday-SN56-26-Gradients-On-Demand
wandb_run: your_name
wandb_runid: dd59f9f5-ca81-47a9-bf7c-9060c0120a4f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0ffc3009-6c04-44d1-ad3a-cf8d216a1180
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.3381 |
| 2.2276 | 0.0020 | 13 | 2.0412 |
| 2.0019 | 0.0041 | 26 | 1.9987 |
| 1.9197 | 0.0061 | 39 | 1.9840 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Phi-4-ReasoningRP-Q5_K_S-GGUF | Triangle104 | 2025-01-29T03:28:23Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:bunnycore/Phi-4-ReasoningRP",
"base_model:quantized:bunnycore/Phi-4-ReasoningRP",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-29T03:27:38Z | ---
base_model: bunnycore/Phi-4-ReasoningRP
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: mit
---
# Triangle104/Phi-4-ReasoningRP-Q5_K_S-GGUF
This model was converted to GGUF format from [`bunnycore/Phi-4-ReasoningRP`](https://huggingface.co/bunnycore/Phi-4-ReasoningRP) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bunnycore/Phi-4-ReasoningRP) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Phi-4-ReasoningRP-Q5_K_S-GGUF --hf-file phi-4-reasoningrp-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Phi-4-ReasoningRP-Q5_K_S-GGUF --hf-file phi-4-reasoningrp-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Phi-4-ReasoningRP-Q5_K_S-GGUF --hf-file phi-4-reasoningrp-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Phi-4-ReasoningRP-Q5_K_S-GGUF --hf-file phi-4-reasoningrp-q5_k_s.gguf -c 2048
```
|
mradermacher/Silver5-Nemo-12B-i1-GGUF | mradermacher | 2025-01-29T03:26:56Z | 590 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Lambent/Silver5-Nemo-12B",
"base_model:quantized:Lambent/Silver5-Nemo-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-01-28T22:32:55Z | ---
base_model: Lambent/Silver5-Nemo-12B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Lambent/Silver5-Nemo-12B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF/resolve/main/Silver5-Nemo-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Silver5-Nemo-12B-GGUF | mradermacher | 2025-01-29T03:26:52Z | 376 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Lambent/Silver5-Nemo-12B",
"base_model:quantized:Lambent/Silver5-Nemo-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-25T09:40:20Z | ---
base_model: Lambent/Silver5-Nemo-12B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lambent/Silver5-Nemo-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Silver5-Nemo-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
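The quants in this repo are single files, but some repos publish very large quants as raw byte splits; assuming split names like `*.gguf.part1of2` (an assumption for illustration, not something this repo uses), they can be rejoined with a short script before loading:
```py
import shutil
from pathlib import Path

# Hypothetical split names; adjust the pattern to the actual part files.
parts = sorted(Path(".").glob("Silver5-Nemo-12B.Q8_0.gguf.part*of*"))

with open("Silver5-Nemo-12B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # append each part in order
```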
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Silver5-Nemo-12B-GGUF/resolve/main/Silver5-Nemo-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso14/27024ac8-42e1-473e-9801-f49f51e5712b | lesso14 | 2025-01-29T03:26:37Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | 2025-01-29T03:25:10Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27024ac8-42e1-473e-9801-f49f51e5712b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4183af7169b52347_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4183af7169b52347_train_data.json
type:
field_instruction: theory
field_output: allProofs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso14/27024ac8-42e1-473e-9801-f49f51e5712b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/4183af7169b52347_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77915825-8e2b-472c-afab-cc2192d1246e
wandb_project: multi
wandb_run: your_name
wandb_runid: 77915825-8e2b-472c-afab-cc2192d1246e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 27024ac8-42e1-473e-9801-f49f51e5712b
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3725
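The card does not include an inference snippet; the following is only a hedged sketch of how a LoRA adapter like this is usually loaded on top of its base model with PEFT (repo ids come from this card, while dtype, device placement, prompt and generation settings are illustrative assumptions):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "sethuiyer/Medichat-Llama3-8B"
adapter_id = "lesso14/27024ac8-42e1-473e-9801-f49f51e5712b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("State and prove a simple lemma about natural numbers.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```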
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 33
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3763 | 0.9771 | 32 | 0.3728 |
| 0.4812 | 1.0076 | 33 | 0.3725 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KoichiYasuoka/Xunzi-Qwen2-1.5B-upos | KoichiYasuoka | 2025-01-29T03:24:48Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"qwen2",
"token-classification",
"classical chinese",
"literary chinese",
"ancient chinese",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-24T14:40:08Z | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎"
---
# Xunzi-Qwen2-1.5B-upos
## Model Description
This is a Qwen2 model pre-trained on Classical Chinese texts for POS-tagging, derived from [Xunzi-Qwen2-1.5B](https://www.modelscope.cn/models/Xunzillm4cc/Xunzi-Qwen2-1.5B). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("upos","KoichiYasuoka/Xunzi-Qwen2-1.5B-upos",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("不入虎穴不得虎子"))
```
|
KoichiYasuoka/deberta-base-japanese-upos | KoichiYasuoka | 2025-01-29T03:24:44Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/deberta-base-japanese-aozora",
"base_model:finetune:KoichiYasuoka/deberta-base-japanese-aozora",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-24T08:12:05Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/deberta-base-japanese-aozora
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# deberta-base-japanese-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-base-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
lesso/a4355d22-eba4-4738-94bf-16a5758836b2 | lesso | 2025-01-29T03:24:44Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"region:us"
] | null | 2025-01-29T03:18:46Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4355d22-eba4-4738-94bf-16a5758836b2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4183af7169b52347_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4183af7169b52347_train_data.json
type:
field_instruction: theory
field_output: allProofs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso/a4355d22-eba4-4738-94bf-16a5758836b2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mixed_precision: bf16
mlflow_experiment_name: /tmp/4183af7169b52347_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77915825-8e2b-472c-afab-cc2192d1246e
wandb_project: lesso18
wandb_run: your_name
wandb_runid: 77915825-8e2b-472c-afab-cc2192d1246e
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a4355d22-eba4-4738-94bf-16a5758836b2
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0991
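The card itself does not show how to run the adapter; one hedged option (repo ids from this card, everything else an assumption) is to merge the LoRA weights into the base model and save a standalone checkpoint:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("sethuiyer/Medichat-Llama3-8B", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "lesso/a4355d22-eba4-4738-94bf-16a5758836b2")

merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("medichat-llama3-8b-a4355d22-merged")
AutoTokenizer.from_pretrained("sethuiyer/Medichat-Llama3-8B").save_pretrained("medichat-llama3-8b-a4355d22-merged")
```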
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0967 | 0.7648 | 200 | 0.0991 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KoichiYasuoka/deberta-large-japanese-upos | KoichiYasuoka | 2025-01-29T03:24:43Z | 110 | 2 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/deberta-large-japanese-aozora",
"base_model:finetune:KoichiYasuoka/deberta-large-japanese-aozora",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-27T06:50:55Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/deberta-large-japanese-aozora
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# deberta-large-japanese-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-aozora). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-large-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
KoichiYasuoka/gpt2-large-japanese-upos | KoichiYasuoka | 2025-01-29T03:24:37Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"token-classification",
"japanese",
"pos",
"ja",
"dataset:universal_dependencies",
"base_model:ku-nlp/gpt2-large-japanese-char",
"base_model:finetune:ku-nlp/gpt2-large-japanese-char",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-22T22:44:06Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
base_model: ku-nlp/gpt2-large-japanese-char
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# gpt2-large-japanese-upos
## Model Description
This is a GPT-2 model for POS-tagging and dependency-parsing, derived from [gpt2-large-japanese-char](https://huggingface.co/ku-nlp/gpt2-large-japanese-char). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("upos","KoichiYasuoka/gpt2-large-japanese-upos",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/gpt2-large-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [GPT系モデルの系列ラベリングによる品詞付与](http://hdl.handle.net/2433/288964), 東洋学へのコンピュータ利用, 第38回研究セミナー (2024年7月26日), pp.3-10.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa/GPT models
|
KoichiYasuoka/gpt2-medium-japanese-upos | KoichiYasuoka | 2025-01-29T03:24:33Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"token-classification",
"japanese",
"pos",
"ja",
"dataset:universal_dependencies",
"base_model:ku-nlp/gpt2-medium-japanese-char",
"base_model:finetune:ku-nlp/gpt2-medium-japanese-char",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-22T22:39:14Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
base_model: ku-nlp/gpt2-medium-japanese-char
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# gpt2-medium-japanese-upos
## Model Description
This is a GPT-2 model for POS-tagging and dependency-parsing, derived from [gpt2-medium-japanese-char](https://huggingface.co/ku-nlp/gpt2-medium-japanese-char). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("upos","KoichiYasuoka/gpt2-medium-japanese-upos",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/gpt2-medium-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [GPT系モデルの系列ラベリングによる品詞付与](http://hdl.handle.net/2433/288964), 東洋学へのコンピュータ利用, 第38回研究セミナー (2024年7月26日), pp.3-10.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa/GPT models
|
KoichiYasuoka/gpt2-small-japanese-wikipedia-juman-upos | KoichiYasuoka | 2025-01-29T03:24:29Z | 5 | 0 | null | [
"pytorch",
"gpt2",
"japanese",
"token-classification",
"pos",
"ja",
"dataset:universal_dependencies",
"base_model:nlp-waseda/gpt2-small-japanese-wikipedia",
"base_model:finetune:nlp-waseda/gpt2-small-japanese-wikipedia",
"license:cc-by-sa-4.0",
"region:us"
] | token-classification | 2024-10-08T07:12:13Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
base_model: nlp-waseda/gpt2-small-japanese-wikipedia
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# gpt2-small-japanese-wikipedia-juman-upos
## Model Description
This is a GPT-2 model for POS-tagging, derived from [gpt2-small-japanese-wikipedia](https://huggingface.co/nlp-waseda/gpt2-small-japanese-wikipedia). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("upos","KoichiYasuoka/gpt2-small-japanese-wikipedia-juman-upos",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi) is required.
|
gavrilstep/b33594b3-ad30-490f-96b6-55d013acf806 | gavrilstep | 2025-01-29T03:24:23Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-29T03:18:52Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b33594b3-ad30-490f-96b6-55d013acf806
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4183af7169b52347_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4183af7169b52347_train_data.json
type:
field_instruction: theory
field_output: allProofs
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/b33594b3-ad30-490f-96b6-55d013acf806
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/4183af7169b52347_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 77915825-8e2b-472c-afab-cc2192d1246e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 77915825-8e2b-472c-afab-cc2192d1246e
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b33594b3-ad30-490f-96b6-55d013acf806
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9640
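The adapter was trained with the base model loaded in 4-bit (`load_in_4bit: true` in the config above), so a natural way to try it is to load the base the same way; this is only a sketch, and the quantization, prompt and generation settings are assumptions rather than values from the card:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "sethuiyer/Medichat-Llama3-8B", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "gavrilstep/b33594b3-ad30-490f-96b6-55d013acf806")

tok = AutoTokenizer.from_pretrained("sethuiyer/Medichat-Llama3-8B")
ids = tok("Outline a short proof sketch.", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=128)[0], skip_special_tokens=True))
```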
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0038 | 1 | 2.0353 |
| 2.1012 | 0.0191 | 5 | 1.8339 |
| 1.6277 | 0.0382 | 10 | 1.2766 |
| 1.083 | 0.0574 | 15 | 1.0797 |
| 1.0813 | 0.0765 | 20 | 1.0027 |
| 0.945 | 0.0956 | 25 | 0.9723 |
| 0.865 | 0.1147 | 30 | 0.9640 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KoichiYasuoka/modernbert-base-japanese-aozora-luw-upos | KoichiYasuoka | 2025-01-29T03:24:18Z | 27 | 0 | null | [
"pytorch",
"modernbert",
"japanese",
"token-classification",
"pos",
"dependency-parsing",
"custom_code",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/modernbert-base-japanese-aozora",
"base_model:finetune:KoichiYasuoka/modernbert-base-japanese-aozora",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-01-12T00:42:47Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/modernbert-base-japanese-aozora
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# modernbert-base-japanese-aozora-luw-upos
## Model Description
This is a ModernBERT model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [modernbert-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("upos","KoichiYasuoka/modernbert-base-japanese-aozora-luw-upos",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/modernbert-base-japanese-aozora-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa/GPT models
|
KoichiYasuoka/modernbert-base-japanese-wikipedia-luw-upos | KoichiYasuoka | 2025-01-29T03:24:15Z | 18 | 0 | null | [
"pytorch",
"modernbert",
"japanese",
"token-classification",
"pos",
"dependency-parsing",
"custom_code",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/modernbert-base-japanese-wikipedia",
"base_model:finetune:KoichiYasuoka/modernbert-base-japanese-wikipedia",
"license:apache-2.0",
"region:us"
] | token-classification | 2025-01-12T08:52:41Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/modernbert-base-japanese-wikipedia
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# modernbert-base-japanese-wikipedia-luw-upos
## Model Description
This is a ModernBERT model pre-trained on Japanese Wikipedia and 青空文庫 texts for POS-tagging and dependency-parsing, derived from [modernbert-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/modernbert-base-japanese-wikipedia). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("upos","KoichiYasuoka/modernbert-base-japanese-wikipedia-luw-upos",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/modernbert-base-japanese-wikipedia-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa/GPT models
|