modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
sergioalves/38ecf603-698a-4da2-b1d6-fb4bfdc840a1 | sergioalves | 2025-01-10T15:19:18Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T14:57:49Z | ---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 38ecf603-698a-4da2-b1d6-fb4bfdc840a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e266add1a4abf13d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e266add1a4abf13d_train_data.json
type:
field_input: teasertext
field_instruction: title
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/38ecf603-698a-4da2-b1d6-fb4bfdc840a1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/e266add1a4abf13d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7b6fcdb3-3506-4e5e-a34f-9ee9c13c3ac0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7b6fcdb3-3506-4e5e-a34f-9ee9c13c3ac0
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 38ecf603-698a-4da2-b1d6-fb4bfdc840a1
This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
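The card does not include usage code; below is a minimal, hedged sketch of loading this LoRA adapter with `peft` on top of the base model named in the config above. The prompt and generation settings are illustrative, and given the NaN evaluation loss the adapter may produce degenerate output.
```python
# Hedged usage sketch (not from the card): attach the LoRA adapter to the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")
# Load the fine-tuned LoRA weights on top of the frozen base model
model = PeftModel.from_pretrained(base, "sergioalves/38ecf603-698a-4da2-b1d6-fb4bfdc840a1")

inputs = tokenizer("Write a short headline:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```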
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_hf with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0008 | 8 | nan |
| 0.0 | 0.0015 | 16 | nan |
| 0.0 | 0.0023 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hrasto/llamas2_tok_l0 | hrasto | 2025-01-10T15:19:02Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T14:14:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
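In lieu of an official snippet, a generic `transformers` sketch for a llama text-generation checkpoint such as this one (assuming the repo ships standard config and tokenizer files; the prompt is illustrative):
```python
# Generic text-generation sketch; assumes standard config/tokenizer files in the repo.
from transformers import pipeline

generator = pipeline("text-generation", model="hrasto/llamas2_tok_l0")
print(generator("Once upon a time", max_new_tokens=32)[0]["generated_text"])
```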
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF | mradermacher | 2025-01-10T15:18:36Z | 354 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AIDX-ktds/ktdsbaseLM-v0.14-onbased-llama3.1",
"base_model:quantized:AIDX-ktds/ktdsbaseLM-v0.14-onbased-llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T14:02:22Z | ---
base_model: AIDX-ktds/ktdsbaseLM-v0.14-onbased-llama3.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AIDX-ktds/ktdsbaseLM-v0.14-onbased-llama3.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
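Beyond GUI tooling, one programmatic option is `llama-cpp-python`, which can pull a quant straight from this repo. A hedged sketch follows; the Q4_K_M filename matches the table below, and the prompt is illustrative.
```python
# Hedged sketch using llama-cpp-python (pip install llama-cpp-python huggingface-hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF",
    filename="ktdsbaseLM-v0.14-onbased-llama3.1.Q4_K_M.gguf",  # "fast, recommended" per the table
)
out = llm("Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```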
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.14-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.14-onbased-llama3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SotirisLegkas/KaLlamaki-stage-2-step-5000 | SotirisLegkas | 2025-01-10T15:12:48Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T13:00:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
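Since the tags mark this as a conversational llama checkpoint, a hedged chat-style sketch using the tokenizer's chat template (assuming one is bundled in the repo's tokenizer config; the message is illustrative):
```python
# Hedged chat sketch; assumes the repo includes a chat template in its tokenizer config.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SotirisLegkas/KaLlamaki-stage-2-step-5000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Hello! Who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```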
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leir1a/SD_mix_models | leir1a | 2025-01-10T15:09:21Z | 0 | 1 | null | [
"license:cc",
"region:us"
] | null | 2023-04-16T08:49:03Z | ---
license: cc
---
## Model Details
Simply mixed text-to-image (t2i) models. They were tuned toward high-quality 2D CG, influenced by elysium_anime.
|
lesso11/0e4b6b7a-ce82-40cd-98cf-3319eca6ef33 | lesso11 | 2025-01-10T15:08:27Z | 10 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:adapter:EleutherAI/pythia-14m",
"region:us"
] | null | 2025-01-10T15:08:02Z | ---
library_name: peft
base_model: EleutherAI/pythia-14m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0e4b6b7a-ce82-40cd-98cf-3319eca6ef33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-14m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bd973ab324d4224d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bd973ab324d4224d_train_data.json
type:
field_input: facts
field_instruction: decomposition
field_output: question
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso11/0e4b6b7a-ce82-40cd-98cf-3319eca6ef33
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/bd973ab324d4224d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c5ad883a-2b32-4910-9122-e0e2ac1a5647
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c5ad883a-2b32-4910-9122-e0e2ac1a5647
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0e4b6b7a-ce82-40cd-98cf-3319eca6ef33
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2528
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 32.6044 | 0.0040 | 1 | 7.3110 |
| 28.0135 | 0.0121 | 3 | 7.5116 |
| 30.7733 | 0.0242 | 6 | 7.4168 |
| 29.6309 | 0.0363 | 9 | 7.2528 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q5_K_S-GGUF | Triangle104 | 2025-01-10T15:06:51Z | 37 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-7B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-7B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T15:06:25Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-7B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-7B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-7B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-7B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q5_k_s.gguf -c 2048
```
|
diaenra/569982f4-a07e-4108-9f79-9a7516bb3e40 | diaenra | 2025-01-10T15:02:41Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"region:us"
] | null | 2025-01-10T13:56:32Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 569982f4-a07e-4108-9f79-9a7516bb3e40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 53946c553452bc08_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/53946c553452bc08_train_data.json
type:
field_input: ''
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: diaenra/569982f4-a07e-4108-9f79-9a7516bb3e40
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_modules_to_save:
- embed_tokens
- lm_head
lora_r: 32
lora_target_linear: true
lora_target_modules:
- gate_proj
- down_proj
- up_proj
- q_proj
- v_proj
- k_proj
- o_proj
lr_scheduler: cosine
max_memory:
0: 70GB
micro_batch_size: 4
mlflow_experiment_name: /tmp/53946c553452bc08_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 239
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: diaenra-tao-miner
wandb_mode: online
wandb_name: f6c2143c-2d86-4bdb-ac98-286dccd7e8c7
wandb_project: tao
wandb_run: diaenra
wandb_runid: f6c2143c-2d86-4bdb-ac98-286dccd7e8c7
warmup_steps: 100
weight_decay: 0.1
xformers_attention: true
```
</details><br>
# 569982f4-a07e-4108-9f79-9a7516bb3e40
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; optimizer args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6318 | 1.0 | 499 | 0.6477 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
smokxy/sdssd-quantized | smokxy | 2025-01-10T15:00:40Z | 7 | 0 | optimum | [
"optimum",
"safetensors",
"bert",
"quantized",
"ner",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T15:00:16Z | ---
tags:
- quantized
- ner
- 4-bit
library_name: optimum
---
# Model - sdssd-quantized
This model has been optimized and uploaded to the HuggingFace Hub.
## Model Details
- Original Repository: sdssd-quantized
- Optimization Tags: quantized, ner, 4-bit
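The card gives no usage code. Given the `bert`, `ner`, and 4-bit `bitsandbytes` tags, loading would plausibly follow the standard token-classification path; a sketch under those assumptions (label names and quantization details are not documented, and the example sentence is illustrative):
```python
# Hedged sketch: standard token-classification loading for a (possibly 4-bit) BERT NER model.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

repo = "smokxy/sdssd-quantized"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)  # picks up any saved quantization config

ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Angela Merkel visited Paris in 2019."))
```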
|
trenden/4655dbb4-1139-460e-872f-8b937e74883b | trenden | 2025-01-10T14:59:52Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf",
"base_model:adapter:NousResearch/CodeLlama-7b-hf",
"region:us"
] | null | 2025-01-10T14:00:18Z | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4655dbb4-1139-460e-872f-8b937e74883b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 824d58f9981ea803_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/824d58f9981ea803_train_data.json
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/4655dbb4-1139-460e-872f-8b937e74883b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/824d58f9981ea803_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0e774f43-da40-499c-a40a-68ac277e959f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0e774f43-da40-499c-a40a-68ac277e959f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4655dbb4-1139-460e-872f-8b937e74883b
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.4271 | 0.0000 | 1 | 1.2013 |
| 5.1587 | 0.0001 | 3 | 1.1995 |
| 5.5428 | 0.0002 | 6 | 1.1662 |
| 4.6788 | 0.0003 | 9 | 1.0517 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Dawid511/speecht5_finetuned_librispeech_polish_epo6_batch4_gas2 | Dawid511 | 2025-01-10T14:58:31Z | 18 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-01-10T14:35:38Z | ---
library_name: transformers
license: mit
base_model: dawid511/speecht5_finetuned_librispeech_polish_epo6_batch2_gas4
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_librispeech_polish_epo6_batch4_gas2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_librispeech_polish_epo6_batch4_gas2
This model is a fine-tuned version of [dawid511/speecht5_finetuned_librispeech_polish_epo6_batch2_gas4](https://huggingface.co/dawid511/speecht5_finetuned_librispeech_polish_epo6_batch2_gas4) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3653
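The card stops short of a usage example; the standard SpeechT5 text-to-speech recipe should apply, sketched below under the assumption that the repo bundles processor files. The x-vector speaker-embedding dataset and the Polish prompt are illustrative.
```python
# Hedged sketch: standard SpeechT5 TTS usage, assuming processor files ship with the repo.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "Dawid511/speecht5_finetuned_librispeech_polish_epo6_batch4_gas2"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dzień dobry", return_tensors="pt")
# Any 512-dim x-vector works as a speaker embedding; this dataset is a common choice
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```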
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7593 | 0.5115 | 100 | 0.3747 |
| 0.786 | 1.0205 | 200 | 0.3787 |
| 0.7781 | 1.5320 | 300 | 0.3732 |
| 0.767 | 2.0409 | 400 | 0.3830 |
| 0.7654 | 2.5524 | 500 | 0.3753 |
| 0.7471 | 3.0614 | 600 | 0.3697 |
| 0.7531 | 3.5729 | 700 | 0.3672 |
| 0.7365 | 4.0818 | 800 | 0.3716 |
| 0.7385 | 4.5934 | 900 | 0.3674 |
| 0.7218 | 5.1023 | 1000 | 0.3692 |
| 0.7326 | 5.6138 | 1100 | 0.3653 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q4_K_S-GGUF | Triangle104 | 2025-01-10T14:57:44Z | 32 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-7B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-7B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T14:57:22Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-7B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-7B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-7B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-7B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-7B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-7b-instruct-abliterated-q4_k_s.gguf -c 2048
```
|
mradermacher/Llama-2-7b-samsum-i1-GGUF | mradermacher | 2025-01-10T14:57:28Z | 625 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:samsum",
"base_model:SalmanFaroz/Llama-2-7b-samsum",
"base_model:quantized:SalmanFaroz/Llama-2-7b-samsum",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-01-10T14:15:35Z | ---
base_model: SalmanFaroz/Llama-2-7b-samsum
datasets:
- samsum
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/SalmanFaroz/Llama-2-7b-samsum
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-2-7b-samsum-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ2_M.gguf) | i1-IQ2_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ3_M.gguf) | i1-IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q4_1.gguf) | i1-Q4_1 | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-samsum-i1-GGUF/resolve/main/Llama-2-7b-samsum.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
smokxy/sas-quantized | smokxy | 2025-01-10T14:57:26Z | 68 | 0 | optimum | [
"optimum",
"safetensors",
"bert",
"quantized",
"ner",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T14:57:00Z | ---
tags:
- quantized
- ner
- 4-bit
library_name: optimum
---
# Model - sas-quantized
This model has been optimized and uploaded to the HuggingFace Hub.
## Model Details
- Original Repository: sas-quantized
- Optimization Tags: quantized, ner, 4-bit
|
amj808/casino-search-query-intent-classifier-quantized | amj808 | 2025-01-10T14:53:38Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-10T06:02:07Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: casino-search-query-intent-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# casino-search-query-intent-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
- Accuracy: 0.9922
- F1: 0.9922
- Precision: 0.9923
- Recall: 0.9922
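No usage code is provided; a minimal `transformers` sketch for this intent classifier (the example query is illustrative, and the label names are not documented in the card):
```python
# Minimal sketch: text-classification pipeline over the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="amj808/casino-search-query-intent-classifier-quantized")
print(classifier("best slot machines near the buffet"))
```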
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 169 | 1.0933 | 0.4119 | 0.3048 | 0.4485 | 0.4119 |
| No log | 2.0 | 338 | 0.9162 | 0.6516 | 0.5873 | 0.7319 | 0.6516 |
| 0.9908 | 3.0 | 507 | 0.7124 | 0.7759 | 0.7617 | 0.8035 | 0.7759 |
| 0.9908 | 4.0 | 676 | 0.5941 | 0.9275 | 0.9270 | 0.9291 | 0.9275 |
| 0.9908 | 5.0 | 845 | 0.5042 | 0.9611 | 0.9610 | 0.9619 | 0.9611 |
| 0.5708 | 6.0 | 1014 | 0.3974 | 0.9832 | 0.9831 | 0.9833 | 0.9832 |
| 0.5708 | 7.0 | 1183 | 0.3191 | 0.9922 | 0.9922 | 0.9923 | 0.9922 |
| 0.5708 | 8.0 | 1352 | 0.2709 | 0.9922 | 0.9922 | 0.9923 | 0.9922 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1.post8
- Datasets 2.21.0
- Tokenizers 0.21.0
|
Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q8_0-GGUF | Triangle104 | 2025-01-10T14:52:40Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T14:52:30Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-2B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q8_0.gguf -c 2048
```
|
Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q6_K-GGUF | Triangle104 | 2025-01-10T14:51:47Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T14:51:38Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-2B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q6_k.gguf -c 2048
```
|
ANGKJ1995/my_awesome_model | ANGKJ1995 | 2025-01-10T14:50:22Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-01-10T14:50:10Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.3074
- eval_model_preparation_time: 0.0018
- eval_accuracy: 0.7619
- eval_runtime: 0.2024
- eval_samples_per_second: 207.561
- eval_steps_per_second: 9.884
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q5_K_S-GGUF | Triangle104 | 2025-01-10T14:48:37Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T14:48:29Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-2B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q5_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q5_k_s.gguf -c 2048
```
|
dragoa/mistral-finetuned-rulexDoc | dragoa | 2025-01-10T14:47:40Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-12-11T16:48:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
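Pending an official snippet, a minimal loading sketch, not verified against this checkpoint, might look like the following; the repo is tagged 4-bit/bitsandbytes, so a CUDA GPU with the bitsandbytes package is assumed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dragoa/mistral-finetuned-rulexDoc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the repo is applied automatically on load.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```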
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_M-GGUF | Triangle104 | 2025-01-10T14:46:22Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T14:46:15Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-2B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_m.gguf -c 2048
```
|
sanaridas/query_classifier | sanaridas | 2025-01-10T14:45:56Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T14:38:21Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
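Pending an official snippet, a minimal sketch might look like the following; the prompt format and label set for this query classifier are undocumented, so the prompt below is a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanaridas/query_classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Placeholder prompt; the training prompt template is not documented.
messages = [{"role": "user", "content": "Classify this query: how do I reset my password?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```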
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hugggof/vampnetv2-d774-l8-h8-mode-vampnet_rms-hchroma-latest | hugggof | 2025-01-10T14:44:59Z | 63 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-01-10T14:44:45Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
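As a minimal sketch of the mixin pattern this card references: the real model class for this checkpoint lives in the author's codebase, so the `VampNet` class below is a hypothetical stand-in.
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical stand-in for the author's actual model class.
class VampNet(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

# from_pretrained() fetches the config and safetensors weights from the Hub;
# it only succeeds with the real model class and its saved init arguments.
model = VampNet.from_pretrained("hugggof/vampnetv2-d774-l8-h8-mode-vampnet_rms-hchroma-latest")
``` |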
Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_S-GGUF | Triangle104 | 2025-01-10T14:44:30Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2-VL-2B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-01-10T14:44:23Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
base_model: huihui-ai/Qwen2-VL-2B-Instruct-abliterated
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2-VL-2B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2-VL-2B-Instruct-abliterated-Q4_K_S-GGUF --hf-file qwen2-vl-2b-instruct-abliterated-q4_k_s.gguf -c 2048
```
|
bfuzzy1/acheron-m | bfuzzy1 | 2025-01-10T14:42:19Z | 204 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"dataset:eth-dl-rewards/math-problems-for-sft",
"base_model:bfuzzy1/acheron-d",
"base_model:finetune:bfuzzy1/acheron-d",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T14:19:33Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
base_model: bfuzzy1/acheron-d
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- eth-dl-rewards/math-problems-for-sft
---
# The M is for Math.
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_path = "bfuzzy1/acheron-m"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto',
trust_remote_code=True
)
messages = [
{"role": "user", "content": "What's 2 + 2 -3?"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(
    input_ids.to(model.device),  # keep inputs on the same device the model was mapped to
    max_new_tokens=100
)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
``` |
adammandic87/0aca886d-9c59-4588-9109-7bc028c1412d | adammandic87 | 2025-01-10T14:40:51Z | 11 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T14:40:18Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0aca886d-9c59-4588-9109-7bc028c1412d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5fa650980024d17c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5fa650980024d17c_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/0aca886d-9c59-4588-9109-7bc028c1412d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5fa650980024d17c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 70d9c003-ae9c-4efa-949d-2650dfd80aa8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 70d9c003-ae9c-4efa-949d-2650dfd80aa8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0aca886d-9c59-4588-9109-7bc028c1412d
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 228.3511
## Model description
More information needed
## Intended uses & limitations
More information needed
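In the absence of an official example, a minimal loading sketch (assuming the LoRA adapter applies cleanly to the base checkpoint) might look like:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped")
model = PeftModel.from_pretrained(base, "adammandic87/0aca886d-9c59-4588-9109-7bc028c1412d")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
```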
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1214.6569 | 0.0021 | 1 | 228.3396 |
| 774.5907 | 0.0062 | 3 | 228.4223 |
| 814.6666 | 0.0125 | 6 | 228.3269 |
| 856.3652 | 0.0187 | 9 | 228.3511 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF | Triangle104 | 2025-01-10T14:39:37Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"arxiv:2409.12186",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-11-19T11:30:46Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) for more details on the model.
---
Model details:
-
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
Significant improvements in code generation, code reasoning, and code fixing. Building on the strong Qwen2.5, we scale the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
A more comprehensive foundation for real-world applications such as Code Agents, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
This repo contains the instruction-tuned 3B Qwen2.5-Coder model, which has the following features:
Type: Causal Language Models
Training Stage: Pretraining & Post-training
Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
Number of Parameters: 3.09B
Number of Parameters (Non-Embedding): 2.77B
Number of Layers: 36
Number of Attention Heads (GQA): 16 for Q and 2 for KV
Context Length: Full 32,768 tokens
For more details, please refer to our blog, GitHub, Documentation, and Arxiv.
Requirements
The code for Qwen2.5-Coder has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
`KeyError: 'qwen2'`
Quickstart
The following code snippet uses apply_chat_template to show how to load the tokenizer and model and how to generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "write a quick sort algorithm."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Evaluation & Performance
Detailed evaluation results are reported in this blog.
For requirements on GPU memory and the respective throughput, see results here.
Citation
If you find our work helpful, feel free to give us a cite.
```bibtex
@article{hui2024qwen2,
  title={Qwen2.5-Coder Technical Report},
  author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
  journal={arXiv preprint arXiv:2409.12186},
  year={2024}
}
@article{qwen2,
  title={Qwen2 Technical Report},
  author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
  journal={arXiv preprint arXiv:2407.10671},
  year={2024}
}
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF --hf-file qwen2.5-coder-3b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF --hf-file qwen2.5-coder-3b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF --hf-file qwen2.5-coder-3b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen2.5-Coder-3B-Instruct-Q6_K-GGUF --hf-file qwen2.5-coder-3b-instruct-q6_k.gguf -c 2048
```
|
chauhoang/349e4d8d-216e-d6a2-0b68-14e1ef6cf06a | chauhoang | 2025-01-10T14:38:09Z | 6 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:TitanML/tiny-mixtral",
"base_model:adapter:TitanML/tiny-mixtral",
"region:us"
] | null | 2025-01-10T14:32:39Z | ---
library_name: peft
base_model: TitanML/tiny-mixtral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 349e4d8d-216e-d6a2-0b68-14e1ef6cf06a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TitanML/tiny-mixtral
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 78128c39f61f0439_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/78128c39f61f0439_train_data.json
type:
field_input: system
field_instruction: question
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/349e4d8d-216e-d6a2-0b68-14e1ef6cf06a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/78128c39f61f0439_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 41887ad1-2b68-4efa-9c77-ff1e2c1cd4b4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 41887ad1-2b68-4efa-9c77-ff1e2c1cd4b4
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 349e4d8d-216e-d6a2-0b68-14e1ef6cf06a
This model is a fine-tuned version of [TitanML/tiny-mixtral](https://huggingface.co/TitanML/tiny-mixtral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
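Note that the evaluation loss above is nan, so the adapter may not yield useful outputs; for completeness, a minimal loading sketch (an assumption, not taken from the card) would be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TitanML/tiny-mixtral")
model = PeftModel.from_pretrained(base, "chauhoang/349e4d8d-216e-d6a2-0b68-14e1ef6cf06a")
```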
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0014 | 10 | nan |
| 0.0 | 0.0028 | 20 | nan |
| 0.0 | 0.0042 | 30 | nan |
| 0.0 | 0.0056 | 40 | nan |
| 0.0 | 0.0070 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fueteruyo/rena2 | fueteruyo | 2025-01-10T14:37:49Z | 11 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T14:07:07Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rena2
---
# Rena2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rena2` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fueteruyo/rena2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
dundurlunka/donyo_donev_cropped_LoRA | dundurlunka | 2025-01-10T14:35:20Z | 16 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-12-23T13:12:17Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: in the style of TOK
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - dundurlunka/donyo_donev_cropped_LoRA
<Gallery />
## Model description
These are dundurlunka/donyo_donev_cropped_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `in the style of TOK` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/dundurlunka/donyo_donev_cropped_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
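Until the snippet above is filled in, a minimal sketch for SDXL LoRA inference might look like this; default settings are assumed, and the fp16-fix VAE mirrors the training setup noted above.
```python
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

# The card notes training used the madebyollin/sdxl-vae-fp16-fix VAE.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("dundurlunka/donyo_donev_cropped_LoRA")
image = pipeline("a village scene, in the style of TOK").images[0]
```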
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
saldanhacl/myself | saldanhacl | 2025-01-10T14:34:49Z | 25 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T13:55:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lucc
---
# Myself
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lucc` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('saldanhacl/myself', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tinh2312/SignBart-KArSL-ALL-100 | tinh2312 | 2025-01-10T14:34:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T14:34:06Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: SignBart-KArSL-ALL-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SignBart-KArSL-ALL-100
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0278
- Accuracy: 0.9942
- Precision: 0.9947
- Recall: 0.9942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:---------------:|:---------:|:------:|
| 5.1417 | 1.0 | 50 | 0.1175 | 3.9188 | 0.0961 | 0.1175 |
| 4.323 | 2.0 | 100 | 0.3996 | 2.7383 | 0.3779 | 0.3996 |
| 3.5125 | 3.0 | 150 | 0.6454 | 1.9336 | 0.6952 | 0.6454 |
| 2.9748 | 4.0 | 200 | 0.7933 | 1.3981 | 0.8114 | 0.7933 |
| 2.6005 | 5.0 | 250 | 0.8417 | 1.0641 | 0.8606 | 0.8417 |
| 2.2397 | 6.0 | 300 | 0.8779 | 0.8145 | 0.8996 | 0.8779 |
| 1.9058 | 7.0 | 350 | 0.9108 | 0.6322 | 0.9222 | 0.9108 |
| 1.7861 | 8.0 | 400 | 0.9404 | 0.5031 | 0.9477 | 0.9404 |
| 1.5948 | 9.0 | 450 | 0.9442 | 0.4105 | 0.9510 | 0.9442 |
| 1.5219 | 10.0 | 500 | 0.9546 | 0.3455 | 0.9613 | 0.9546 |
| 1.3415 | 11.0 | 550 | 0.9654 | 0.2813 | 0.9695 | 0.9654 |
| 1.2171 | 12.0 | 600 | 0.9629 | 0.2456 | 0.9671 | 0.9629 |
| 1.0959 | 13.0 | 650 | 0.975 | 0.1969 | 0.9771 | 0.975 |
| 1.0067 | 14.0 | 700 | 0.9792 | 0.1725 | 0.9809 | 0.9792 |
| 1.0844 | 15.0 | 750 | 0.9821 | 0.1507 | 0.9836 | 0.9821 |
| 0.931 | 16.0 | 800 | 0.9854 | 0.1307 | 0.9866 | 0.9854 |
| 0.8038 | 17.0 | 850 | 0.9858 | 0.1202 | 0.9870 | 0.9858 |
| 0.8623 | 18.0 | 900 | 0.9842 | 0.1083 | 0.9851 | 0.9842 |
| 0.7439 | 19.0 | 950 | 0.9879 | 0.0987 | 0.9891 | 0.9879 |
| 0.7537 | 20.0 | 1000 | 0.9892 | 0.0912 | 0.9903 | 0.9892 |
| 0.599 | 21.0 | 1050 | 0.9908 | 0.0788 | 0.9917 | 0.9908 |
| 0.6198 | 22.0 | 1100 | 0.9904 | 0.0711 | 0.9913 | 0.9904 |
| 0.5669 | 23.0 | 1150 | 0.9917 | 0.0663 | 0.9925 | 0.9917 |
| 0.5134 | 24.0 | 1200 | 0.9904 | 0.0630 | 0.9913 | 0.9904 |
| 0.5558 | 25.0 | 1250 | 0.99 | 0.0575 | 0.9909 | 0.99 |
| 0.5118 | 26.0 | 1300 | 0.9912 | 0.0589 | 0.9920 | 0.9912 |
| 0.5522 | 27.0 | 1350 | 0.9904 | 0.0517 | 0.9913 | 0.9904 |
| 0.4916 | 28.0 | 1400 | 0.9912 | 0.0487 | 0.9920 | 0.9912 |
| 0.3872 | 29.0 | 1450 | 0.9912 | 0.0440 | 0.9921 | 0.9912 |
| 0.4532 | 30.0 | 1500 | 0.9917 | 0.0464 | 0.9924 | 0.9917 |
| 0.4277 | 31.0 | 1550 | 0.9912 | 0.0408 | 0.9921 | 0.9912 |
| 0.4723 | 32.0 | 1600 | 0.9921 | 0.0378 | 0.9927 | 0.9921 |
| 0.3774 | 33.0 | 1650 | 0.9929 | 0.0351 | 0.9936 | 0.9929 |
| 0.3451 | 34.0 | 1700 | 0.9929 | 0.0368 | 0.9936 | 0.9929 |
| 0.3106 | 35.0 | 1750 | 0.9933 | 0.0349 | 0.9938 | 0.9933 |
| 0.2933 | 36.0 | 1800 | 0.9921 | 0.0364 | 0.9928 | 0.9921 |
| 0.2468 | 37.0 | 1850 | 0.9912 | 0.0369 | 0.9920 | 0.9912 |
| 0.461 | 38.0 | 1900 | 0.9921 | 0.0312 | 0.9928 | 0.9921 |
| 0.2706 | 39.0 | 1950 | 0.9933 | 0.0319 | 0.9939 | 0.9933 |
| 0.2784 | 40.0 | 2000 | 0.9925 | 0.0306 | 0.9932 | 0.9925 |
| 0.3167 | 41.0 | 2050 | 0.9929 | 0.0314 | 0.9936 | 0.9929 |
| 0.2242 | 42.0 | 2100 | 0.9929 | 0.0319 | 0.9936 | 0.9929 |
| 0.2439 | 43.0 | 2150 | 0.9929 | 0.0324 | 0.9937 | 0.9929 |
| 0.1995 | 44.0 | 2200 | 0.9938 | 0.0267 | 0.9943 | 0.9938 |
| 0.2178 | 45.0 | 2250 | 0.9925 | 0.0273 | 0.9932 | 0.9925 |
| 0.3018 | 46.0 | 2300 | 0.9938 | 0.0281 | 0.9943 | 0.9938 |
| 0.3096 | 47.0 | 2350 | 0.9942 | 0.0285 | 0.9948 | 0.9942 |
| 0.2636 | 48.0 | 2400 | 0.9933 | 0.0261 | 0.9941 | 0.9933 |
| 0.2441 | 49.0 | 2450 | 0.9929 | 0.0233 | 0.9937 | 0.9929 |
| 0.2102 | 50.0 | 2500 | 0.9929 | 0.0255 | 0.9935 | 0.9929 |
| 0.2302 | 51.0 | 2550 | 0.9921 | 0.0268 | 0.9928 | 0.9921 |
| 0.1548 | 52.0 | 2600 | 0.9929 | 0.0251 | 0.9935 | 0.9929 |
| 0.2293 | 53.0 | 2650 | 0.9925 | 0.0264 | 0.9932 | 0.9925 |
| 0.199 | 54.0 | 2700 | 0.9942 | 0.0278 | 0.9947 | 0.9942 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
lesso11/64fe75bc-328c-4e02-aba4-59885360b872 | lesso11 | 2025-01-10T14:33:48Z | 13 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T14:33:13Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 64fe75bc-328c-4e02-aba4-59885360b872
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: true
chat_template: llama3
datasets:
- data_files:
- 5fa650980024d17c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5fa650980024d17c_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso11/64fe75bc-328c-4e02-aba4-59885360b872
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/5fa650980024d17c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 70d9c003-ae9c-4efa-949d-2650dfd80aa8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 70d9c003-ae9c-4efa-949d-2650dfd80aa8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 64fe75bc-328c-4e02-aba4-59885360b872
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9798
## Model description
More information needed
## Intended uses & limitations
More information needed
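A minimal sketch for loading, and optionally merging, the adapter (assuming it applies cleanly to the base checkpoint):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped")
model = PeftModel.from_pretrained(base, "lesso11/64fe75bc-328c-4e02-aba4-59885360b872")
merged = model.merge_and_unload()  # fold the LoRA weights back into the base model
```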
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.0013 | 0.0041 | 1 | 9.2526 |
| 18.6078 | 0.0207 | 5 | 9.2333 |
| 17.987 | 0.0415 | 10 | 9.1727 |
| 16.0464 | 0.0622 | 15 | 9.1243 |
| 17.1351 | 0.0830 | 20 | 9.0388 |
| 16.808 | 0.1037 | 25 | 9.0161 |
| 17.159 | 0.1245 | 30 | 9.0357 |
| 17.1988 | 0.1452 | 35 | 9.0004 |
| 17.7996 | 0.1660 | 40 | 8.9848 |
| 19.0675 | 0.1867 | 45 | 8.9646 |
| 17.9008 | 0.2075 | 50 | 8.9798 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
filipesantoscv11/aa9f02f0-d275-45dc-bf75-0fad7cf2dace | filipesantoscv11 | 2025-01-10T14:33:33Z | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T14:33:06Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa9f02f0-d275-45dc-bf75-0fad7cf2dace
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5fa650980024d17c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5fa650980024d17c_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: filipesantoscv11/aa9f02f0-d275-45dc-bf75-0fad7cf2dace
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/5fa650980024d17c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 70d9c003-ae9c-4efa-949d-2650dfd80aa8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 70d9c003-ae9c-4efa-949d-2650dfd80aa8
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aa9f02f0-d275-45dc-bf75-0fad7cf2dace
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.7210
## Model description
More information needed
## Intended uses & limitations
More information needed
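Given the evaluation loss above (9.72), outputs are unlikely to be useful yet; for completeness, a hypothetical inference sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped")
model = PeftModel.from_pretrained(base, "filipesantoscv11/aa9f02f0-d275-45dc-bf75-0fad7cf2dace")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```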
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0021 | 1 | 9.9166 |
| 41.2044 | 0.0166 | 8 | 9.8815 |
| 34.3414 | 0.0332 | 16 | 9.7777 |
| 36.2179 | 0.0499 | 24 | 9.7210 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
chauhoang/7a350c97-86f1-2489-eb69-5c501cb910b8 | chauhoang | 2025-01-10T14:31:35Z | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T12:46:54Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a350c97-86f1-2489-eb69-5c501cb910b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ada7442f6e923a8b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ada7442f6e923a8b_train_data.json
type:
field_input: categories
field_instruction: abstract
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/7a350c97-86f1-2489-eb69-5c501cb910b8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/ada7442f6e923a8b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 82f898a5-fada-40e9-88a0-24569774a8be
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 82f898a5-fada-40e9-88a0-24569774a8be
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a350c97-86f1-2489-eb69-5c501cb910b8
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7476
## Model description
More information needed
## Intended uses & limitations
More information needed
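Per the axolotl config above (field_instruction: abstract, field_output: title), the adapter appears to be trained to generate titles from abstracts; a minimal inference sketch under that assumption:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama-chat")
model = PeftModel.from_pretrained(base, "chauhoang/7a350c97-86f1-2489-eb69-5c501cb910b8")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama-chat")

# Placeholder abstract; real inputs should follow the training prompt format.
abstract = "We present a parameter-efficient method for fine-tuning small language models."
inputs = tokenizer(abstract, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```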
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 3.0002 |
| 3.0485 | 0.0002 | 10 | 2.7228 |
| 2.266 | 0.0003 | 20 | 1.9589 |
| 1.7736 | 0.0005 | 30 | 1.7927 |
| 1.6886 | 0.0006 | 40 | 1.7530 |
| 1.5719 | 0.0008 | 50 | 1.7476 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/phi-4-abliterated-Q6_K-GGUF | Triangle104 | 2025-01-10T14:31:21Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/phi-4-abliterated",
"base_model:quantized:huihui-ai/phi-4-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T14:30:30Z | ---
license: mit
license_link: https://huggingface.co/huihui-ai/phi-4-abliterated/resolve/main/LICENSE
language:
- en
base_model: huihui-ai/phi-4-abliterated
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: How should I explain the Internet?
library_name: transformers
---
# Triangle104/phi-4-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`huihui-ai/phi-4-abliterated`](https://huggingface.co/huihui-ai/phi-4-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/phi-4-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi-4-abliterated-Q6_K-GGUF --hf-file phi-4-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi-4-abliterated-Q6_K-GGUF --hf-file phi-4-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi-4-abliterated-Q6_K-GGUF --hf-file phi-4-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi-4-abliterated-Q6_K-GGUF --hf-file phi-4-abliterated-q6_k.gguf -c 2048
```
|
phxia/gpt2_adapter | phxia | 2025-01-10T14:30:13Z | 12 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"arxiv:1910.09700",
"base_model:phxia/gpt2",
"base_model:adapter:phxia/gpt2",
"region:us"
] | text-generation | 2024-12-31T15:55:39Z | ---
library_name: peft
tags:
- peft
- text-generation
pipeline_tag: text-generation
base_model:
- phxia/gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Legalaz/15_llambo2_09_21 | Legalaz | 2025-01-10T14:27:41Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T14:23:43Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9705
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
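With this config saved locally (as, say, `merge.yaml` — a hypothetical filename) and the two source models available at the paths it references, a merge like this is typically reproduced with mergekit's CLI; a minimal sketch:
```bash
# run the linear merge described above; add --cuda to merge on GPU
mergekit-yaml merge.yaml ./merged-model
```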
|
John6666/sensual-mind-pixelover-v20-sdxl | John6666 | 2025-01-10T14:26:50Z | 453 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"bright",
"sharp",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-10T14:19:08Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- bright
- sharp
---
Original model is [here](https://civitai.com/models/986666?modelVersionId=1264551).
This model was created by [SensualMind](https://civitai.com/user/SensualMind).
|
Triangle104/phi-4-abliterated-Q5_K_M-GGUF | Triangle104 | 2025-01-10T14:26:27Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/phi-4-abliterated",
"base_model:quantized:huihui-ai/phi-4-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T14:25:41Z | ---
license: mit
license_link: https://huggingface.co/huihui-ai/phi-4-abliterated/resolve/main/LICENSE
language:
- en
base_model: huihui-ai/phi-4-abliterated
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: How should I explain the Internet?
library_name: transformers
---
# Triangle104/phi-4-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/phi-4-abliterated`](https://huggingface.co/huihui-ai/phi-4-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/phi-4-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi-4-abliterated-Q5_K_M-GGUF --hf-file phi-4-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi-4-abliterated-Q5_K_M-GGUF --hf-file phi-4-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi-4-abliterated-Q5_K_M-GGUF --hf-file phi-4-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi-4-abliterated-Q5_K_M-GGUF --hf-file phi-4-abliterated-q5_k_m.gguf -c 2048
```
|
smokxy/sadxds-quantized | smokxy | 2025-01-10T14:20:49Z | 7 | 0 | optimum | [
"optimum",
"safetensors",
"bert",
"quantized",
"ner",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T14:20:18Z | ---
tags:
- quantized
- ner
- 8-bit
library_name: optimum
---
# Model - sadxds-quantized
This model has been optimized and uploaded to the Hugging Face Hub.
## Model Details
- Original Repository: sadxds-quantized
- Optimization Tags: quantized, ner, 8-bit
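Given the tags (BERT, NER, 8-bit, bitsandbytes), a minimal loading sketch might look like the following — assuming the checkpoint carries a standard token-classification head and loads through Transformers' bitsandbytes integration:
```py
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          BitsAndBytesConfig)

repo = "smokxy/sadxds-quantized"
tokenizer = AutoTokenizer.from_pretrained(repo)
# load the weights in 8-bit via bitsandbytes
model = AutoModelForTokenClassification.from_pretrained(
    repo, quantization_config=BitsAndBytesConfig(load_in_8bit=True)
)
```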
|
denbeo/8d892612-d232-4281-a1d2-1b0a0f6e0dcd | denbeo | 2025-01-10T14:20:27Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M-Instruct",
"base_model:adapter:unsloth/SmolLM-360M-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T14:05:06Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8d892612-d232-4281-a1d2-1b0a0f6e0dcd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a9aee418597e9eaf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a9aee418597e9eaf_train_data.json
type:
field_input: genres
field_instruction: title
field_output: description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/8d892612-d232-4281-a1d2-1b0a0f6e0dcd
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/a9aee418597e9eaf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 990847b3-2b66-4b22-b549-a753ee8c0b65
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 990847b3-2b66-4b22-b549-a753ee8c0b65
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8d892612-d232-4281-a1d2-1b0a0f6e0dcd
This model is a fine-tuned version of [unsloth/SmolLM-360M-Instruct](https://huggingface.co/unsloth/SmolLM-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1429 | 0.0313 | 200 | 2.9984 |
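Since this repo stores a LoRA adapter rather than full weights, a minimal inference sketch — assuming the adapter is applied on top of the base model listed above — might be:
```py
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/SmolLM-360M-Instruct"
# load the base model, then attach this LoRA adapter
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "denbeo/8d892612-d232-4281-a1d2-1b0a0f6e0dcd")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```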
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tuanna08go/b7232842-5121-b43c-ca0d-91094982e237 | tuanna08go | 2025-01-10T14:20:26Z | 12 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T14:08:49Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b7232842-5121-b43c-ca0d-91094982e237
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 96842f7ffda08476_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/96842f7ffda08476_train_data.json
type:
field_input: bot_description
field_instruction: bot_name
field_output: orig_bot_description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/b7232842-5121-b43c-ca0d-91094982e237
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/96842f7ffda08476_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d202aa92-5d43-4c6f-876b-1fb97191d72f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d202aa92-5d43-4c6f-876b-1fb97191d72f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b7232842-5121-b43c-ca0d-91094982e237
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.3](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0013 | 1 | 1.2409 |
| 4.7519 | 0.0131 | 10 | 1.0085 |
| 3.5907 | 0.0262 | 20 | 0.8609 |
| 3.0473 | 0.0393 | 30 | 0.8097 |
| 3.156 | 0.0524 | 40 | 0.7848 |
| 3.7723 | 0.0656 | 50 | 0.7799 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/a76eefd0-5d33-4038-a018-7c42b4d6924a | VERSIL91 | 2025-01-10T14:16:16Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2025-01-10T14:04:06Z | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a76eefd0-5d33-4038-a018-7c42b4d6924a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1ade7b2e9d8ab2ce_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1ade7b2e9d8ab2ce_train_data.json
type:
field_instruction: issue_body
field_output: issue_title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/a76eefd0-5d33-4038-a018-7c42b4d6924a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/1ade7b2e9d8ab2ce_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a76eefd0-5d33-4038-a018-7c42b4d6924a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a76eefd0-5d33-4038-a018-7c42b4d6924a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a76eefd0-5d33-4038-a018-7c42b4d6924a
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0070 | 1 | nan |
| 0.0 | 0.0348 | 5 | nan |
| 0.0 | 0.0695 | 10 | nan |
| 0.0 | 0.1043 | 15 | nan |
| 0.0 | 0.1391 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
fahd200581/AISHAAMAI | fahd200581 | 2025-01-10T14:16:16Z | 19 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T13:44:22Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AISHAAMAI
---
# Aishaamai
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AISHAAMAI` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fahd200581/AISHAAMAI', weight_name='lora.safetensors')
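# note: the trigger word AISHAAMAI should appear in the prompt (see "Trigger words" above)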
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
StefaniaCri/mbart_romainian_to_emoji | StefaniaCri | 2025-01-10T14:14:49Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-02T11:43:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RoboSG/js-fake-bach-epochs20 | RoboSG | 2025-01-10T14:13:55Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T09:56:44Z | ---
library_name: transformers
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: js-fake-bach-epochs20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# js-fake-bach-epochs20
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5973
- Accuracy: 0.0033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006058454513356471
- train_batch_size: 16
- eval_batch_size: 32
- seed: 1
- optimizer: AdamW (PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.2427 | 1.2550 | 315 | 0.8253 | 0.0007 |
| 0.8106 | 2.5100 | 630 | 0.7777 | 0.0021 |
| 0.7663 | 3.7649 | 945 | 0.7449 | 0.0017 |
| 0.7263 | 5.0199 | 1260 | 0.6997 | 0.0027 |
| 0.689 | 6.2749 | 1575 | 0.6683 | 0.0018 |
| 0.6524 | 7.5299 | 1890 | 0.6396 | 0.0008 |
| 0.6158 | 8.7849 | 2205 | 0.6139 | 0.0021 |
| 0.5807 | 10.0398 | 2520 | 0.5981 | 0.0010 |
| 0.5437 | 11.2948 | 2835 | 0.5848 | 0.0030 |
| 0.5109 | 12.5498 | 3150 | 0.5841 | 0.0026 |
| 0.4781 | 13.8048 | 3465 | 0.5799 | 0.0028 |
| 0.4453 | 15.0598 | 3780 | 0.5867 | 0.0034 |
| 0.4169 | 16.3147 | 4095 | 0.5915 | 0.0034 |
| 0.3972 | 17.5697 | 4410 | 0.5968 | 0.0034 |
| 0.3847 | 18.8247 | 4725 | 0.5973 | 0.0033 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF | mradermacher | 2025-01-10T14:13:33Z | 215 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:liminerity/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2",
"base_model:quantized:liminerity/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T14:07:41Z | ---
base_model: liminerity/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/liminerity/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
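As a concrete example, a recent llama.cpp build can pull one of these quants straight from this repo (a sketch; file names follow the table below):
```bash
# download and run the Q4_K_M quant directly from Hugging Face
llama-cli --hf-repo mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF \
  --hf-file Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q4_K_M.gguf -p "Hello"
```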
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q4_K_M.gguf) | Q4_K_M | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q5_K_S.gguf) | Q5_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q5_K_M.gguf) | Q5_K_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2-GGUF/resolve/main/Bitnet-Mistral.0.2-330m-v0.2-grokfast-v2.f16.gguf) | f16 | 0.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
John6666/luminarqmix-vpred-noobaixl-illustriousxl-merge-model-v10-sdxl | John6666 | 2025-01-10T14:13:21Z | 147 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cute",
"merge",
"v-pred",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-Vpred-0.9r",
"base_model:merge:Laxhar/noobai-XL-Vpred-0.9r",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:Raelina/Raehoshi-illust-XL-3",
"base_model:merge:Raelina/Raehoshi-illust-XL-3",
"base_model:advokat/IterComp_safetensors",
"base_model:merge:advokat/IterComp_safetensors",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-01-10T14:07:13Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cute
- merge
- v-pred
- illustrious
base_model:
- Raelina/Raehoshi-illust-XL-3
- advokat/IterComp_safetensors
- Laxhar/noobai-XL-Vpred-1.0
- Laxhar/noobai-XL-Vpred-0.9r
---
Original model is [here](https://civitai.com/models/1125276/luminarqmix-vpred-noobaixl-illustrious-xl-merge-model?modelVersionId=1264783).
This model was created by [hybskgks28275](https://civitai.com/user/hybskgks28275).
|
adammandic87/58839797-be6d-4d6b-87f1-788ee879110b | adammandic87 | 2025-01-10T14:13:03Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T14:02:05Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 58839797-be6d-4d6b-87f1-788ee879110b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e1e0ecf2dc3751fc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e1e0ecf2dc3751fc_train_data.json
type:
field_input: augmented_prompt
field_instruction: prompt
field_output: solution_1
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/58839797-be6d-4d6b-87f1-788ee879110b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e1e0ecf2dc3751fc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9a1f3333-979a-4670-bc8c-562de485c372
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9a1f3333-979a-4670-bc8c-562de485c372
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 58839797-be6d-4d6b-87f1-788ee879110b
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0008 | 6 | nan |
| 0.0 | 0.0013 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hrasto/llamas2_tok_s1 | hrasto | 2025-01-10T14:12:45Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T13:15:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fbaldassarri/meta-llama_Llama-3.2-1B-auto_gptq-int8-gs128-asym | fbaldassarri | 2025-01-10T14:11:23Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autoround",
"auto-round",
"autogptq",
"gptq",
"auto-gptq",
"woq",
"meta",
"pytorch",
"llama-3",
"intel-autoround",
"intel",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | text-generation | 2025-01-08T19:53:40Z | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.2
library_name: transformers
tags:
- autoround
- auto-round
- autogptq
- gptq
- auto-gptq
- woq
- meta
- pytorch
- llama
- llama-3
- intel-autoround
- intel
model_name: Llama 3.2 1B
base_model: meta-llama/Llama-3.2-1B
inference: false
model_creator: meta-llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) using torch.float32 for quantization tuning.
- 8 bits (INT8)
- group size = 128
- Asymmetrical Quantization
- Method AutoGPTQ
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round)
Note: this INT8 version of Llama-3.2-1B has been quantized to run inference on CPU.
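A minimal inference sketch — assuming your Transformers/Optimum stack provides CPU-capable GPTQ kernels (e.g. via Intel's extensions); exact backend requirements may vary:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/meta-llama_Llama-3.2-1B-auto_gptq-int8-gs128-asym"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# keep everything on CPU, matching the note above
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")

inputs = tokenizer("The three primary colors are", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=24)[0]))
```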
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.3.tar.gz
tar -xvzf v0.4.3.tar.gz
cd auto-round-0.4.3
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "meta-llama/Llama-3.2-1B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
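# 8 bits, group size 128, asymmetric (sym=False), tuned on CPU without AMP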
bits, group_size, sym, device, amp = 8, 128, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/meta-llama_Llama-3.2-1B-auto_gptq-int8-gs128-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
## License
[Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE)
## Disclaimer
This quantized model comes with no warranty. It has been developed for research purposes only.
|
dimasik87/8b0a437c-6809-499a-a7e2-11cdd1c421dd | dimasik87 | 2025-01-10T14:11:19Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-01-10T14:08:25Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b0a437c-6809-499a-a7e2-11cdd1c421dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83ff291d83cb43e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83ff291d83cb43e5_train_data.json
type:
field_input: File Name
field_instruction: Code
field_output: Unit Test - (Ground Truth)
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik87/8b0a437c-6809-499a-a7e2-11cdd1c421dd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/83ff291d83cb43e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd9fec32-14d9-40e2-b9df-4b29f156a315
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd9fec32-14d9-40e2-b9df-4b29f156a315
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8b0a437c-6809-499a-a7e2-11cdd1c421dd
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (PyTorch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0034 | 1 | 0.8773 |
| 0.8386 | 0.0275 | 8 | 0.7904 |
| 0.6699 | 0.0549 | 16 | 0.7231 |
| 0.7453 | 0.0824 | 24 | 0.7004 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
quannh197/fd9fec32-14d9-40e2-b9df-4b29f156a315 | quannh197 | 2025-01-10T14:10:34Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"region:us"
] | null | 2025-01-10T14:08:14Z | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd9fec32-14d9-40e2-b9df-4b29f156a315
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83ff291d83cb43e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83ff291d83cb43e5_train_data.json
type:
field_input: File Name
field_instruction: Code
field_output: Unit Test - (Ground Truth)
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: quannh197/fd9fec32-14d9-40e2-b9df-4b29f156a315
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/83ff291d83cb43e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd9fec32-14d9-40e2-b9df-4b29f156a315
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd9fec32-14d9-40e2-b9df-4b29f156a315
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# fd9fec32-14d9-40e2-b9df-4b29f156a315
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9653 | 0.0034 | 1 | 0.9669 |
| 0.9214 | 0.0103 | 3 | 0.9642 |
| 1.0278 | 0.0206 | 6 | 0.9202 |
| 0.7053 | 0.0309 | 9 | 0.8822 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rsicproject/vit-GPT-SYDNEY-captioning | rsicproject | 2025-01-10T14:10:16Z | 40 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T14:07:56Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: vit-GPT-SYDNEY-captioning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-GPT-SYDNEY-captioning
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0945
- Rouge: 0.7166
- Bleu1: 0.7960
- Bleu2: 0.7224
- Bleu3: 0.6511
- Bleu4: 0.5862
- Meteor: 0.7406
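The card ships no inference example; below is a minimal sketch, assuming the checkpoint loads with the generic `VisionEncoderDecoderModel` classes (the image path is a hypothetical placeholder, not from the card).
```python
# A hedged sketch, not from the card authors: caption one image with this checkpoint.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

model_id = "rsicproject/vit-GPT-SYDNEY-captioning"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
processor = AutoImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("sydney_scene.jpg").convert("RGB")  # hypothetical local image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values, max_new_tokens=32)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
```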
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1024
- num_epochs: 128
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:------:|
| No log | 1.0 | 39 | 1.2499 | 0.4608 | 0.5480 | 0.4380 | 0.3577 | 0.2933 | 0.4268 |
| No log | 2.0 | 78 | 0.9391 | 0.4750 | 0.5410 | 0.4063 | 0.3224 | 0.2542 | 0.4806 |
| No log | 3.0 | 117 | 0.8546 | 0.6454 | 0.7483 | 0.6633 | 0.5737 | 0.4951 | 0.6413 |
| No log | 4.0 | 156 | 0.8292 | 0.6817 | 0.7628 | 0.6728 | 0.5796 | 0.4979 | 0.6846 |
| No log | 5.0 | 195 | 0.8240 | 0.6288 | 0.7029 | 0.5928 | 0.4938 | 0.4064 | 0.6683 |
| No log | 6.0 | 234 | 0.8186 | 0.6958 | 0.7772 | 0.6857 | 0.5913 | 0.5087 | 0.7089 |
| No log | 7.0 | 273 | 0.8367 | 0.6996 | 0.7677 | 0.6821 | 0.5899 | 0.5082 | 0.7045 |
| No log | 8.0 | 312 | 0.8558 | 0.6946 | 0.7738 | 0.6896 | 0.6018 | 0.5244 | 0.7076 |
| No log | 9.0 | 351 | 0.8639 | 0.6831 | 0.7587 | 0.6766 | 0.5881 | 0.5090 | 0.7084 |
| No log | 10.0 | 390 | 0.8834 | 0.6358 | 0.7702 | 0.6850 | 0.5969 | 0.5145 | 0.6678 |
| No log | 11.0 | 429 | 0.8819 | 0.7109 | 0.7876 | 0.7093 | 0.6356 | 0.5701 | 0.7405 |
| No log | 12.0 | 468 | 0.9616 | 0.6692 | 0.8127 | 0.7279 | 0.6379 | 0.5446 | 0.7055 |
| No log | 13.0 | 507 | 0.9424 | 0.6912 | 0.7668 | 0.6685 | 0.5776 | 0.4938 | 0.7441 |
| No log | 14.0 | 546 | 0.9606 | 0.6966 | 0.7938 | 0.7184 | 0.6436 | 0.5766 | 0.7199 |
| No log | 15.0 | 585 | 0.9306 | 0.7260 | 0.7895 | 0.7233 | 0.6621 | 0.6106 | 0.7658 |
| No log | 16.0 | 624 | 0.9969 | 0.7437 | 0.8241 | 0.7629 | 0.7013 | 0.6495 | 0.7718 |
| No log | 17.0 | 663 | 0.9749 | 0.7341 | 0.8082 | 0.7322 | 0.6519 | 0.5787 | 0.7481 |
| No log | 18.0 | 702 | 1.0044 | 0.7131 | 0.8 | 0.7271 | 0.6534 | 0.5849 | 0.7338 |
| No log | 19.0 | 741 | 0.9802 | 0.6500 | 0.7680 | 0.6814 | 0.6011 | 0.5212 | 0.7175 |
| No log | 20.0 | 780 | 1.0433 | 0.7352 | 0.8274 | 0.7519 | 0.6795 | 0.6138 | 0.7611 |
| No log | 21.0 | 819 | 1.0284 | 0.7063 | 0.7815 | 0.6989 | 0.6192 | 0.5517 | 0.7280 |
| No log | 22.0 | 858 | 1.0655 | 0.7263 | 0.7997 | 0.7199 | 0.6393 | 0.5702 | 0.7432 |
| No log | 23.0 | 897 | 1.0390 | 0.6922 | 0.7900 | 0.7131 | 0.6357 | 0.5658 | 0.7350 |
| No log | 24.0 | 936 | 1.1043 | 0.7324 | 0.7987 | 0.7184 | 0.6389 | 0.5679 | 0.7692 |
| No log | 25.0 | 975 | 1.0593 | 0.7221 | 0.8098 | 0.7309 | 0.6585 | 0.5907 | 0.7463 |
| No log | 26.0 | 1014 | 1.0945 | 0.7166 | 0.7960 | 0.7224 | 0.6511 | 0.5862 | 0.7406 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3
|
sylvan54/git-base-bean | sylvan54 | 2025-01-10T14:09:51Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-01-09T11:49:22Z | ---
library_name: transformers
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-bean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-bean
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
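No usage snippet is included; a minimal sketch, assuming the standard `image-to-text` pipeline applies to this GIT checkpoint ("bean_leaf.jpg" is a hypothetical local file):
```python
# A hedged sketch, not from the card authors: caption one image via the generic pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text", model="sylvan54/git-base-bean")
print(captioner("bean_leaf.jpg"))  # returns a list of {"generated_text": ...} dicts
```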
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
PrunaAI/hrasto-llamas2_tok_s1-bnb-8bit-smashed | PrunaAI | 2025-01-10T14:08:19Z | 7 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:hrasto/llamas2_tok_s1",
"base_model:quantized:hrasto/llamas2_tok_s1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T14:08:12Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: hrasto/llamas2_tok_s1
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend re-running the measurements under your own use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference speed, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo hrasto/llamas2_tok_s1 are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 8-bit smashed model; device_map='auto' places it on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained("PrunaAI/hrasto-llamas2_tok_s1-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
# The tokenizer is unchanged, so it comes from the original base repo.
tokenizer = AutoTokenizer.from_pretrained("hrasto/llamas2_tok_s1")

# Tokenize a prompt, move it to the model's device, and generate.
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, hrasto/llamas2_tok_s1, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html). |
lesso06/f426e9cc-b1e1-42c0-a6d4-4c9968be284e | lesso06 | 2025-01-10T14:02:06Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"region:us"
] | null | 2025-01-10T13:24:24Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f426e9cc-b1e1-42c0-a6d4-4c9968be284e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: true
chat_template: llama3
datasets:
- data_files:
- 26f032e89bdce086_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/26f032e89bdce086_train_data.json
type:
field_input: context
field_instruction: question
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso06/f426e9cc-b1e1-42c0-a6d4-4c9968be284e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/26f032e89bdce086_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 46b255fe-df74-4e45-b78e-f8a45fd7c90c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 46b255fe-df74-4e45-b78e-f8a45fd7c90c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# f426e9cc-b1e1-42c0-a6d4-4c9968be284e
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8644
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3191 | 0.0008 | 1 | 2.7301 |
| 1.6307 | 0.0071 | 9 | 1.7010 |
| 1.0874 | 0.0142 | 18 | 1.1264 |
| 0.9531 | 0.0213 | 27 | 0.9928 |
| 1.2836 | 0.0284 | 36 | 0.9506 |
| 1.1158 | 0.0355 | 45 | 0.9314 |
| 0.9932 | 0.0426 | 54 | 0.9029 |
| 0.4571 | 0.0497 | 63 | 0.8926 |
| 0.9113 | 0.0568 | 72 | 0.8765 |
| 0.8712 | 0.0640 | 81 | 0.8691 |
| 1.4005 | 0.0711 | 90 | 0.8655 |
| 0.9174 | 0.0782 | 99 | 0.8644 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/7d22d2cc-a089-4ea9-b0e1-28f60fc292b6 | nttx | 2025-01-10T13:59:07Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:50:20Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7d22d2cc-a089-4ea9-b0e1-28f60fc292b6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e155d9abe53506a9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e155d9abe53506a9_train_data.json
type:
field_instruction: query
field_output: answers
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: nttx/7d22d2cc-a089-4ea9-b0e1-28f60fc292b6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e155d9abe53506a9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 266eb7bb-b56f-4a51-aba3-cf10ca2672ff
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 266eb7bb-b56f-4a51-aba3-cf10ca2672ff
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7d22d2cc-a089-4ea9-b0e1-28f60fc292b6
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4384
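Since the repo contains only the LoRA adapter, here is a minimal loading sketch, assuming the standard PEFT flow against the base model named above (the prompt is an arbitrary example):
```python
# A hedged sketch, not from the card authors: attach this LoRA adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "nttx/7d22d2cc-a089-4ea9-b0e1-28f60fc292b6")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-0.5B-Instruct")

inputs = tokenizer("What is LoRA?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```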
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.1580 |
| 0.4634 | 0.0070 | 50 | 0.5326 |
| 0.3632 | 0.0141 | 100 | 0.4659 |
| 0.4364 | 0.0211 | 150 | 0.4417 |
| 0.4343 | 0.0282 | 200 | 0.4384 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF | mradermacher | 2025-01-10T13:58:55Z | 218 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/M7Yamshadowexperiment28_Inex12Neural",
"base_model:quantized:MaziyarPanahi/M7Yamshadowexperiment28_Inex12Neural",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T13:42:37Z | ---
base_model: MaziyarPanahi/M7Yamshadowexperiment28_Inex12Neural
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: M7Yamshadowexperiment28_Inex12Neural
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/M7Yamshadowexperiment28_Inex12Neural
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
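As a concrete starting point, a minimal sketch using `llama-cpp-python` (one GGUF-capable runtime among several; the file name matches the Q4_K_M row in the table below and is assumed to be downloaded locally):
```python
# A hedged sketch, not from the quantizer: run one prompt against a downloaded quant.
from llama_cpp import Llama

llm = Llama(
    model_path="M7Yamshadowexperiment28_Inex12Neural.Q4_K_M.gguf",  # local quant file
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)
out = llm("Write one sentence about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```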
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/M7Yamshadowexperiment28_Inex12Neural-GGUF/resolve/main/M7Yamshadowexperiment28_Inex12Neural.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF | mradermacher | 2025-01-10T13:58:54Z | 254 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/MeliodasT3qm7_M7Yamshadowexperiment28",
"base_model:quantized:MaziyarPanahi/MeliodasT3qm7_M7Yamshadowexperiment28",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T13:02:52Z | ---
base_model: MaziyarPanahi/MeliodasT3qm7_M7Yamshadowexperiment28
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: MeliodasT3qm7_M7Yamshadowexperiment28
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/MeliodasT3qm7_M7Yamshadowexperiment28
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
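To fetch a single quant without cloning the whole repo, a minimal sketch with `huggingface_hub` (the file name is taken from the Q4_K_M row in the table below):
```python
# A hedged sketch, not from the quantizer: download one quant into the local HF cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF",
    filename="MeliodasT3qm7_M7Yamshadowexperiment28.Q4_K_M.gguf",
)
print(path)  # local path of the downloaded quant
```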
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasT3qm7_M7Yamshadowexperiment28-GGUF/resolve/main/MeliodasT3qm7_M7Yamshadowexperiment28.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
StefaniaCri/mbart_romainian_to_emoji_translated | StefaniaCri | 2025-01-10T13:58:18Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-10T13:56:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
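Pending the authors' own instructions, a minimal sketch, assuming this mBART checkpoint loads with the generic seq2seq classes (the Romanian input sentence is an arbitrary example):
```python
# A hedged sketch, not from the model authors: run one Romanian sentence through the model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "StefaniaCri/mbart_romainian_to_emoji_translated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Te iubesc!", return_tensors="pt")  # arbitrary example input
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```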
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
willtensora/32080d72-25f3-4650-b173-b6dd52e12801 | willtensora | 2025-01-10T13:58:06Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-10T13:57:37Z | ---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 32080d72-25f3-4650-b173-b6dd52e12801
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- format: custom
path: argilla/databricks-dolly-15k-curated-en
type:
field_input: original-instruction
field_instruction: original-instruction
field_output: original-response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: willtensora/32080d72-25f3-4650-b173-b6dd52e12801
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: argilla/databricks-dolly-15k-curated-en
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 00000000-0000-0000-0000-000000000000
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 00000000-0000-0000-0000-000000000000
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 32080d72-25f3-4650-b173-b6dd52e12801
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9315 | 0.0006 | 1 | 11.9313 |
| 11.9319 | 0.0017 | 3 | 11.9313 |
| 11.926 | 0.0034 | 6 | 11.9313 |
| 11.9287 | 0.0050 | 9 | 11.9313 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Daemontatox/AetherUncensored | Daemontatox | 2025-01-10T13:57:49Z | 59 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:cognitivecomputations/Dolphin3.0-Llama3.1-8B",
"base_model:finetune:cognitivecomputations/Dolphin3.0-Llama3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-09T22:42:06Z | ---
base_model: cognitivecomputations/Dolphin3.0-Llama3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Uploaded model
- **Developed by:** Daemontatox
- **License:** apache-2.0
- **Finetuned from model:** cognitivecomputations/Dolphin3.0-Llama3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
trenden/0b4b7c78-41cf-4c59-b5a8-145297e61213 | trenden | 2025-01-10T13:54:45Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:50:31Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0b4b7c78-41cf-4c59-b5a8-145297e61213
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e155d9abe53506a9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e155d9abe53506a9_train_data.json
type:
field_instruction: query
field_output: answers
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/0b4b7c78-41cf-4c59-b5a8-145297e61213
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e155d9abe53506a9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 266eb7bb-b56f-4a51-aba3-cf10ca2672ff
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 266eb7bb-b56f-4a51-aba3-cf10ca2672ff
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0b4b7c78-41cf-4c59-b5a8-145297e61213
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0008 | 6 | nan |
| 0.0 | 0.0013 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/llama-3-8b-tune-folio-GGUF | mradermacher | 2025-01-10T13:54:14Z | 331 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TongZheng1999/llama-3-8b-tune-folio",
"base_model:quantized:TongZheng1999/llama-3-8b-tune-folio",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T13:24:08Z | ---
base_model: TongZheng1999/llama-3-8b-tune-folio
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TongZheng1999/llama-3-8b-tune-folio
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-8b-tune-folio-GGUF/resolve/main/llama-3-8b-tune-folio.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF | mradermacher | 2025-01-10T13:48:41Z | 325 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AIDX-ktds/ktdsbaseLM-v0.15-onbased-llama3.1",
"base_model:quantized:AIDX-ktds/ktdsbaseLM-v0.15-onbased-llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T12:19:27Z | ---
base_model: AIDX-ktds/ktdsbaseLM-v0.15-onbased-llama3.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AIDX-ktds/ktdsbaseLM-v0.15-onbased-llama3.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.15-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.15-onbased-llama3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF | mradermacher | 2025-01-10T13:48:41Z | 301 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"unsloth",
"gemma",
"trl",
"en",
"hi",
"dataset:yahma/alpaca-cleaned",
"dataset:ravithejads/samvaad-hi-filtered",
"dataset:HydraIndicLM/hindi_alpaca_dolly_67k",
"base_model:kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0",
"base_model:quantized:kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-10T13:37:12Z | ---
base_model: kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0
datasets:
- yahma/alpaca-cleaned
- ravithejads/samvaad-hi-filtered
- HydraIndicLM/hindi_alpaca_dolly_67k
language:
- en
- hi
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- text-generation
- transformers
- unsloth
- gemma
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q3_K_L.gguf) | Q3_K_L | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q4_K_M.gguf) | Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2B-Hinglish-LORA-v1.0-GGUF/resolve/main/Gemma-2B-Hinglish-LORA-v1.0.f16.gguf) | f16 | 5.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lesso04/f4c8bfb5-32f7-46ad-a383-325c0c615d05 | lesso04 | 2025-01-10T13:47:34Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | 2025-01-10T13:36:33Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4c8bfb5-32f7-46ad-a383-325c0c615d05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- fb9adc4d4987f24a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb9adc4d4987f24a_train_data.json
type:
field_input: section_text
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso04/f4c8bfb5-32f7-46ad-a383-325c0c615d05
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/fb9adc4d4987f24a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66da8cc4-c8c2-4305-a347-61b464528b61
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66da8cc4-c8c2-4305-a347-61b464528b61
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# f4c8bfb5-32f7-46ad-a383-325c0c615d05
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0030 | 1 | nan |
| 0.0 | 0.0149 | 5 | nan |
| 0.0 | 0.0297 | 10 | nan |
| 0.0 | 0.0446 | 15 | nan |
| 0.0 | 0.0594 | 20 | nan |
| 0.0 | 0.0743 | 25 | nan |
| 0.0 | 0.0892 | 30 | nan |
| 0.0 | 0.1040 | 35 | nan |
| 0.0 | 0.1189 | 40 | nan |
| 0.0 | 0.1337 | 45 | nan |
| 0.0 | 0.1486 | 50 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso05/530799de-0d58-401f-b489-6803bff65c90 | lesso05 | 2025-01-10T13:47:34Z | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-01-10T13:28:19Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 530799de-0d58-401f-b489-6803bff65c90
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: true
chat_template: llama3
datasets:
- data_files:
- b88155c385ea165a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b88155c385ea165a_train_data.json
type:
field_instruction: question
field_output: reponses
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso05/530799de-0d58-401f-b489-6803bff65c90
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/b88155c385ea165a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92a1cd3e-471f-481a-aa73-6c496dcf52e3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92a1cd3e-471f-481a-aa73-6c496dcf52e3
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 530799de-0d58-401f-b489-6803bff65c90
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3766 | 0.0001 | 1 | 10.3783 |
| 10.3782 | 0.0006 | 9 | 10.3782 |
| 10.3765 | 0.0012 | 18 | 10.3781 |
| 10.3808 | 0.0017 | 27 | 10.3780 |
| 10.3776 | 0.0023 | 36 | 10.3779 |
| 10.3777 | 0.0029 | 45 | 10.3778 |
| 10.3748 | 0.0035 | 54 | 10.3777 |
| 10.3784 | 0.0041 | 63 | 10.3777 |
| 10.3771 | 0.0046 | 72 | 10.3777 |
| 10.3768 | 0.0052 | 81 | 10.3776 |
| 10.3775 | 0.0058 | 90 | 10.3776 |
| 10.3775 | 0.0064 | 99 | 10.3776 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/phi-4-abliterated-Q4_K_M-GGUF | Triangle104 | 2025-01-10T13:46:09Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/phi-4-abliterated",
"base_model:quantized:huihui-ai/phi-4-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T13:45:29Z | ---
license: mit
license_link: https://huggingface.co/huihui-ai/phi-4-abliterated/resolve/main/LICENSE
language:
- en
base_model: huihui-ai/phi-4-abliterated
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: How should I explain the Internet?
library_name: transformers
---
# Triangle104/phi-4-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/phi-4-abliterated`](https://huggingface.co/huihui-ai/phi-4-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/phi-4-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi-4-abliterated-Q4_K_M-GGUF --hf-file phi-4-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi-4-abliterated-Q4_K_M-GGUF --hf-file phi-4-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi-4-abliterated-Q4_K_M-GGUF --hf-file phi-4-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi-4-abliterated-Q4_K_M-GGUF --hf-file phi-4-abliterated-q4_k_m.gguf -c 2048
```
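The same file can also be driven from Python through llama-cpp-python; a minimal sketch, assuming the `llama-cpp-python` package is installed (`from_pretrained` fetches the GGUF from the Hub on first use):
```python
# Load the quantized checkpoint with llama-cpp-python and run a completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Triangle104/phi-4-abliterated-Q4_K_M-GGUF",
    filename="phi-4-abliterated-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```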
|
shopitalic/serene-ultraplush-towel-clay-rafael | shopitalic | 2025-01-10T13:46:09Z | 171 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T13:46:03Z | ---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# serene ultraplush towel clay rafael
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shopitalic/serene-ultraplush-towel-clay-rafael/tree/main) them in the Files & versions tab.
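A sketch of applying this LoRA on top of the base model with diffusers; the prompt is illustrative, and the base model is gated (license acceptance required) and memory-hungry:
```python
# Load FLUX.1-dev, attach this LoRA, and render one image.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # needs a large-memory GPU; CPU offload is an alternative
pipe.load_lora_weights("shopitalic/serene-ultraplush-towel-clay-rafael")
image = pipe("a plush clay-colored bath towel, studio shot").images[0]
image.save("towel.png")
```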
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
Triangle104/Thoughtful-Llama-RP-3b | Triangle104 | 2025-01-10T13:46:08Z | 26 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:merge:bunnycore/Llama-3.2-3B-Pure-RP",
"base_model:prithivMLmods/Llama-Deepsync-3B",
"base_model:merge:prithivMLmods/Llama-Deepsync-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T13:44:04Z | ---
base_model:
- bunnycore/Llama-3.2-3B-Pure-RP
- prithivMLmods/Llama-Deepsync-3B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [bunnycore/Llama-3.2-3B-Pure-RP](https://huggingface.co/bunnycore/Llama-3.2-3B-Pure-RP)
* [prithivMLmods/Llama-Deepsync-3B](https://huggingface.co/prithivMLmods/Llama-Deepsync-3B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: bunnycore/Llama-3.2-3B-Pure-RP
- model: prithivMLmods/Llama-Deepsync-3B
merge_method: slerp
base_model: bunnycore/Llama-3.2-3B-Pure-RP
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0]
```
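The merged checkpoint loads like any ordinary causal LM; a minimal sketch (bfloat16 and `device_map="auto"` are assumptions, the latter requiring `accelerate`):
```python
# Load the SLERP-merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Triangle104/Thoughtful-Llama-RP-3b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Triangle104/Thoughtful-Llama-RP-3b")
```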
|
VERSIL91/66da8cc4-c8c2-4305-a347-61b464528b61 | VERSIL91 | 2025-01-10T13:45:49Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | 2025-01-10T13:36:32Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 66da8cc4-c8c2-4305-a347-61b464528b61
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb9adc4d4987f24a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb9adc4d4987f24a_train_data.json
type:
field_input: section_text
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/66da8cc4-c8c2-4305-a347-61b464528b61
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb9adc4d4987f24a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66da8cc4-c8c2-4305-a347-61b464528b61
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66da8cc4-c8c2-4305-a347-61b464528b61
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 66da8cc4-c8c2-4305-a347-61b464528b61
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0059 | 1 | nan |
| 0.0 | 0.0297 | 5 | nan |
| 0.0 | 0.0594 | 10 | nan |
| 0.0 | 0.0892 | 15 | nan |
| 0.0 | 0.1189 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sergioalves/9c157d4b-6459-4383-9e4a-4b86b64163ac | sergioalves | 2025-01-10T13:45:10Z | 17 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-01-10T13:43:16Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9c157d4b-6459-4383-9e4a-4b86b64163ac
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d01b5c4ce07f41a0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d01b5c4ce07f41a0_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/9c157d4b-6459-4383-9e4a-4b86b64163ac
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/d01b5c4ce07f41a0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fd97bc97-7fab-49d5-8ed7-01af218a5056
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fd97bc97-7fab-49d5-8ed7-01af218a5056
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9c157d4b-6459-4383-9e4a-4b86b64163ac
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (Hugging Face implementation) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | nan |
| 0.0 | 0.0073 | 8 | nan |
| 0.0 | 0.0146 | 16 | nan |
| 0.0 | 0.0219 | 24 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hongngo/b2ccb032-a770-4137-9247-237fccd64926 | hongngo | 2025-01-10T13:44:49Z | 12 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T12:55:13Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b2ccb032-a770-4137-9247-237fccd64926
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7abb359048f38070_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7abb359048f38070_train_data.json
type:
field_input: tokens
field_instruction: intent
field_output: utterance
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/b2ccb032-a770-4137-9247-237fccd64926
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7abb359048f38070_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: abe44fa8-66e0-4595-8c92-af54fe3a57fe
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: abe44fa8-66e0-4595-8c92-af54fe3a57fe
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b2ccb032-a770-4137-9247-237fccd64926
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0326
## Model description
More information needed
## Intended uses & limitations
More information needed
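As a minimal usage sketch, the adapter can be attached to its base model with PEFT (`device_map="auto"` assumes `accelerate` is installed):
```python
# Load the base model, then apply this LoRA adapter on top of it.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Nemo-Instruct-2407",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "hongngo/b2ccb032-a770-4137-9247-237fccd64926")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Mistral-Nemo-Instruct-2407")
```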
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2282 | 0.0162 | 200 | 0.0326 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nhung03/8929acac-0e3e-40e5-a5c1-748642dbca8d | nhung03 | 2025-01-10T13:44:31Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T13:24:25Z | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8929acac-0e3e-40e5-a5c1-748642dbca8d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 26f032e89bdce086_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/26f032e89bdce086_train_data.json
type:
field_input: context
field_instruction: question
field_output: title
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/8929acac-0e3e-40e5-a5c1-748642dbca8d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/26f032e89bdce086_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 46b255fe-df74-4e45-b78e-f8a45fd7c90c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 46b255fe-df74-4e45-b78e-f8a45fd7c90c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8929acac-0e3e-40e5-a5c1-748642dbca8d
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0212 | 0.0790 | 200 | 0.8807 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Legalaz/23_llambo2_08_37 | Legalaz | 2025-01-10T13:43:44Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T13:39:58Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top1
* /root/top2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9342
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
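Conceptually, a linear merge is just a weighted average of matching parameter tensors, with the weights taken from the YAML above; a toy illustration follows (mergekit itself also handles sharding, dtypes, and tokenizers):
```python
# Toy sketch of a linear merge: elementwise weighted average of tensors.
import torch

def linear_merge(a: dict, b: dict, w_a: float = 0.0628, w_b: float = 0.9342) -> dict:
    return {name: w_a * a[name] + w_b * b[name] for name in a}

a = {"w": torch.ones(2, 2)}
b = {"w": torch.zeros(2, 2)}
print(linear_merge(a, b)["w"])  # every entry equals 0.0628
```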
|
lesso03/b4d6f169-f3bf-4142-8ee3-b3072ad1912f | lesso03 | 2025-01-10T13:43:05Z | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-llama-2-7b",
"base_model:adapter:NousResearch/Nous-Hermes-llama-2-7b",
"license:mit",
"region:us"
] | null | 2025-01-10T13:36:32Z | ---
library_name: peft
license: mit
base_model: NousResearch/Nous-Hermes-llama-2-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b4d6f169-f3bf-4142-8ee3-b3072ad1912f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-llama-2-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- fb9adc4d4987f24a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb9adc4d4987f24a_train_data.json
type:
field_input: section_text
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso03/b4d6f169-f3bf-4142-8ee3-b3072ad1912f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/fb9adc4d4987f24a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 20
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66da8cc4-c8c2-4305-a347-61b464528b61
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66da8cc4-c8c2-4305-a347-61b464528b61
warmup_steps: 5
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# b4d6f169-f3bf-4142-8ee3-b3072ad1912f
This model is a fine-tuned version of [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0015 | 1 | nan |
| 0.0 | 0.0059 | 4 | nan |
| 0.0 | 0.0119 | 8 | nan |
| 0.0 | 0.0178 | 12 | nan |
| 0.0 | 0.0238 | 16 | nan |
| 0.0 | 0.0297 | 20 | nan |
| 0.0 | 0.0357 | 24 | nan |
| 0.0 | 0.0416 | 28 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
denbeo/c2f91a76-1176-488a-8877-051610673970 | denbeo | 2025-01-10T13:41:28Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-10T12:53:45Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c2f91a76-1176-488a-8877-051610673970
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d1da17a435e62cf7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d1da17a435e62cf7_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: denbeo/c2f91a76-1176-488a-8877-051610673970
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d1da17a435e62cf7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31dc5ae8-56c1-4c69-91e0-8935644d0a3c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31dc5ae8-56c1-4c69-91e0-8935644d0a3c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c2f91a76-1176-488a-8877-051610673970
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7332 | 0.4444 | 200 | 0.2935 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/phi-4-abliterated-Q4_K_S-GGUF | Triangle104 | 2025-01-10T13:35:45Z | 45 | 0 | transformers | [
"transformers",
"gguf",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/phi-4-abliterated",
"base_model:quantized:huihui-ai/phi-4-abliterated",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-10T13:35:09Z | ---
license: mit
license_link: https://huggingface.co/huihui-ai/phi-4-abliterated/resolve/main/LICENSE
language:
- en
base_model: huihui-ai/phi-4-abliterated
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: How should I explain the Internet?
library_name: transformers
---
# Triangle104/phi-4-abliterated-Q4_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/phi-4-abliterated`](https://huggingface.co/huihui-ai/phi-4-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/phi-4-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi-4-abliterated-Q4_K_S-GGUF --hf-file phi-4-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi-4-abliterated-Q4_K_S-GGUF --hf-file phi-4-abliterated-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi-4-abliterated-Q4_K_S-GGUF --hf-file phi-4-abliterated-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi-4-abliterated-Q4_K_S-GGUF --hf-file phi-4-abliterated-q4_k_s.gguf -c 2048
```
|
kostiantynk-out/1312c246-d739-414f-b0f1-6fb9bc1cb25a | kostiantynk-out | 2025-01-10T13:29:07Z | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:27:27Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1312c246-d739-414f-b0f1-6fb9bc1cb25a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bb469122edf6ea7b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bb469122edf6ea7b_train_data.json
type:
field_input: ''
field_instruction: input_persona
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/1312c246-d739-414f-b0f1-6fb9bc1cb25a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/bb469122edf6ea7b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c4ff25be-7ef1-40cb-a8cb-e0bc2a097844
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c4ff25be-7ef1-40cb-a8cb-e0bc2a097844
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1312c246-d739-414f-b0f1-6fb9bc1cb25a
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7296 | 0.0004 | 1 | 1.7418 |
| 1.7598 | 0.0013 | 3 | 1.7384 |
| 1.7149 | 0.0026 | 6 | 1.6993 |
| 1.6058 | 0.0038 | 9 | 1.5814 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nttx/cc312658-ef11-4ab6-b767-863f240f4c02 | nttx | 2025-01-10T13:27:05Z | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:23:19Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cc312658-ef11-4ab6-b767-863f240f4c02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bb469122edf6ea7b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bb469122edf6ea7b_train_data.json
type:
field_input: ''
field_instruction: input_persona
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: nttx/cc312658-ef11-4ab6-b767-863f240f4c02
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/bb469122edf6ea7b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c4ff25be-7ef1-40cb-a8cb-e0bc2a097844
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c4ff25be-7ef1-40cb-a8cb-e0bc2a097844
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cc312658-ef11-4ab6-b767-863f240f4c02
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.7478 |
| 1.1898 | 0.0213 | 50 | 1.1920 |
| 1.0931 | 0.0425 | 100 | 1.1058 |
| 1.0317 | 0.0638 | 150 | 1.0779 |
| 1.062 | 0.0851 | 200 | 1.0721 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF | mradermacher | 2025-01-10T13:23:52Z | 364 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1",
"base_model:quantized:AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-10T12:12:27Z | ---
base_model: AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AIDX-ktds/ktdsbaseLM-v0.16-onbased-llama3.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ktdsbaseLM-v0.16-onbased-llama3.1-GGUF/resolve/main/ktdsbaseLM-v0.16-onbased-llama3.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Shawon16/VideoMAE_BdSLW60_FrameRate_NOT_Corrected_with_Augment_20_epoch_RQ | Shawon16 | 2025-01-10T13:21:31Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-01-09T17:47:01Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: VideoMAE_BdSLW60_FrameRate_NOT_Corrected_with_Augment_20_epoch_RQ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VideoMAE_BdSLW60_FrameRate_NOT_Corrected_with_Augment_20_epoch_RQ
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
- Accuracy: 0.9906
- Precision: 0.9913
- Recall: 0.9906
- F1: 0.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
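A minimal inference sketch; the random frames below are a stand-in for a real 16-frame clip (VideoMAE-base expects 16 frames per video):
```python
# Classify a 16-frame clip with this fine-tuned VideoMAE checkpoint.
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

ckpt = "Shawon16/VideoMAE_BdSLW60_FrameRate_NOT_Corrected_with_Augment_20_epoch_RQ"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# Stand-in frames; replace with real RGB frames sampled from a video.
frames = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(frames, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```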
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 17940
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 11.7722 | 0.05 | 897 | 2.3329 | 0.4482 | 0.4616 | 0.4482 | 0.3915 |
| 2.7045 | 1.0500 | 1795 | 0.6715 | 0.8471 | 0.8870 | 0.8471 | 0.8384 |
| 0.7855 | 2.0500 | 2693 | 0.2378 | 0.9412 | 0.9474 | 0.9412 | 0.9401 |
| 0.5503 | 3.0500 | 3591 | 0.1367 | 0.9635 | 0.9686 | 0.9635 | 0.9635 |
| 0.2537 | 4.05 | 4488 | 0.1621 | 0.9612 | 0.9658 | 0.9612 | 0.9608 |
| 0.2549 | 5.0500 | 5386 | 0.1229 | 0.9765 | 0.9789 | 0.9765 | 0.9761 |
| 0.3236 | 6.0500 | 6284 | 0.0916 | 0.9765 | 0.9799 | 0.9765 | 0.9763 |
| 0.2078 | 7.0500 | 7182 | 0.1703 | 0.96 | 0.9647 | 0.96 | 0.9600 |
| 0.1967 | 8.05 | 8079 | 0.1708 | 0.9706 | 0.9731 | 0.9706 | 0.9707 |
| 0.2457 | 9.0500 | 8977 | 0.1500 | 0.9718 | 0.9772 | 0.9718 | 0.9716 |
| 0.0204 | 10.0500 | 9875 | 0.1181 | 0.9812 | 0.9833 | 0.9812 | 0.9811 |
| 0.0753 | 11.0500 | 10773 | 0.1418 | 0.9753 | 0.9775 | 0.9753 | 0.9755 |
| 0.0568 | 12.05 | 11670 | 0.1563 | 0.9765 | 0.9791 | 0.9765 | 0.9763 |
| 0.0851 | 13.0500 | 12568 | 0.0903 | 0.9847 | 0.9856 | 0.9847 | 0.9846 |
| 0.0106 | 14.0500 | 13466 | 0.0935 | 0.9871 | 0.9881 | 0.9871 | 0.9869 |
| 0.0171 | 15.0500 | 14364 | 0.0429 | 0.9929 | 0.9934 | 0.9929 | 0.9929 |
| 0.0025 | 16.05 | 15261 | 0.0584 | 0.9882 | 0.9890 | 0.9882 | 0.9882 |
| 0.0006 | 17.0500 | 16159 | 0.0693 | 0.9882 | 0.9894 | 0.9882 | 0.9883 |
| 0.0001 | 18.0500 | 17057 | 0.0513 | 0.9906 | 0.9913 | 0.9906 | 0.9906 |
| 0.0001 | 19.0492 | 17940 | 0.0470 | 0.9906 | 0.9913 | 0.9906 | 0.9906 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.1
|
lesso05/f0b09731-237d-4066-bcb5-5550e721046b | lesso05 | 2025-01-10T13:21:17Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T12:53:31Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f0b09731-237d-4066-bcb5-5550e721046b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
bf16: true
chat_template: llama3
datasets:
- data_files:
- d1da17a435e62cf7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d1da17a435e62cf7_train_data.json
type:
field_instruction: inputs
field_output: targets
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso05/f0b09731-237d-4066-bcb5-5550e721046b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/d1da17a435e62cf7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 31dc5ae8-56c1-4c69-91e0-8935644d0a3c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 31dc5ae8-56c1-4c69-91e0-8935644d0a3c
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# f0b09731-237d-4066-bcb5-5550e721046b
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0044 | 1 | nan |
| 0.0 | 0.04 | 9 | nan |
| 0.0 | 0.08 | 18 | nan |
| 0.0 | 0.12 | 27 | nan |
| 0.0 | 0.16 | 36 | nan |
| 0.0 | 0.2 | 45 | nan |
| 0.0 | 0.24 | 54 | nan |
| 0.0 | 0.28 | 63 | nan |
| 0.0 | 0.32 | 72 | nan |
| 0.0 | 0.36 | 81 | nan |
| 0.0 | 0.4 | 90 | nan |
| 0.0 | 0.44 | 99 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VERSIL91/abe44fa8-66e0-4595-8c92-af54fe3a57fe | VERSIL91 | 2025-01-10T13:19:57Z | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T12:54:47Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: abe44fa8-66e0-4595-8c92-af54fe3a57fe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: unsloth/Mistral-Nemo-Instruct-2407
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7abb359048f38070_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7abb359048f38070_train_data.json
type:
field_input: tokens
field_instruction: intent
field_output: utterance
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/abe44fa8-66e0-4595-8c92-af54fe3a57fe
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/7abb359048f38070_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: abe44fa8-66e0-4595-8c92-af54fe3a57fe
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: abe44fa8-66e0-4595-8c92-af54fe3a57fe
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# abe44fa8-66e0-4595-8c92-af54fe3a57fe
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
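The config above loads the base model in 8-bit through bitsandbytes; as a rough sketch (an illustration of the `quantization_config` block, not code shipped with this adapter), the equivalent `transformers` call looks like:

```py
# Hedged sketch of the 8-bit load implied by the quantization_config above.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # offload overflow modules to CPU in fp32
)
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Nemo-Instruct-2407",
    quantization_config=bnb,
    device_map="auto",
)
```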
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0003 | 1 | nan |
| 0.0 | 0.0016 | 5 | nan |
| 0.0 | 0.0032 | 10 | nan |
| 0.0 | 0.0048 | 15 | nan |
| 0.0 | 0.0065 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Best000/c862c73e-caf1-4baf-9db3-b77aba2664c6 | Best000 | 2025-01-10T13:19:43Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-10T13:19:18Z | ---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c862c73e-caf1-4baf-9db3-b77aba2664c6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- faf505f721c74b2f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf505f721c74b2f_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Best000/c862c73e-caf1-4baf-9db3-b77aba2664c6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf505f721c74b2f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1cb10721-c011-4749-8017-6a8e714bc097
wandb_project: birthday-sn56-16-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1cb10721-c011-4749-8017-6a8e714bc097
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
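For readers unfamiliar with the `format`/`no_input_format` fields above, a minimal sketch of how a training record is rendered into a prompt (the record itself is invented, and this mirrors axolotl's documented custom-format behavior rather than quoting its source):

```py
# Hedged sketch of the custom prompt format in the config above.
record = {"instruction": "Translate to French", "input": "Hello", "output": "Bonjour"}

fmt = "{instruction} {input}"    # used when the record has an input field
no_input_fmt = "{instruction}"   # used when it does not

prompt = (fmt if record.get("input") else no_input_fmt).format(**record)
print(prompt)            # -> "Translate to French Hello"
print(record["output"])  # completion the model is trained to produce
```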
</details><br>
# c862c73e-caf1-4baf-9db3-b77aba2664c6
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9347 | 0.0038 | 1 | 11.9442 |
| 11.9383 | 0.0114 | 3 | 11.9442 |
| 11.937 | 0.0228 | 6 | 11.9440 |
| 11.9269 | 0.0342 | 9 | 11.9439 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso04/d5f27abc-e96e-4700-b6f3-607118b532a0 | lesso04 | 2025-01-10T13:19:05Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-10T13:18:19Z | ---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d5f27abc-e96e-4700-b6f3-607118b532a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: true
chat_template: llama3
datasets:
- data_files:
- faf505f721c74b2f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf505f721c74b2f_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso04/d5f27abc-e96e-4700-b6f3-607118b532a0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 77GiB
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/faf505f721c74b2f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1cb10721-c011-4749-8017-6a8e714bc097
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1cb10721-c011-4749-8017-6a8e714bc097
warmup_steps: 10
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# d5f27abc-e96e-4700-b6f3-607118b532a0
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
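For reference, the cosine schedule with warmup listed above corresponds to the standard `transformers` helper; a minimal sketch (the single dummy parameter stands in for real model weights):

```py
# Hedged sketch of the LR schedule listed above: 10 warmup steps,
# cosine decay over 50 training steps.
import torch
from transformers import get_cosine_schedule_with_warmup

param = torch.nn.Parameter(torch.zeros(1))  # stand-in for model parameters
optimizer = torch.optim.AdamW([param], lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=10, num_training_steps=50)

for _ in range(50):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # LR has decayed to ~0 by the final step
```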
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9343 | 0.0076 | 1 | 11.9358 |
| 11.9279 | 0.0380 | 5 | 11.9356 |
| 11.9358 | 0.0760 | 10 | 11.9351 |
| 11.9355 | 0.1141 | 15 | 11.9344 |
| 11.9317 | 0.1521 | 20 | 11.9337 |
| 11.9325 | 0.1901 | 25 | 11.9332 |
| 11.9341 | 0.2281 | 30 | 11.9328 |
| 11.9266 | 0.2662 | 35 | 11.9325 |
| 11.9326 | 0.3042 | 40 | 11.9323 |
| 11.9319 | 0.3422 | 45 | 11.9322 |
| 11.9347 | 0.3802 | 50 | 11.9322 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dzanbek/1dacb01e-22f8-497f-91c2-0295264fcc29 | dzanbek | 2025-01-10T13:18:46Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-10T13:18:21Z | ---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1dacb01e-22f8-497f-91c2-0295264fcc29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- faf505f721c74b2f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf505f721c74b2f_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/1dacb01e-22f8-497f-91c2-0295264fcc29
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf505f721c74b2f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1cb10721-c011-4749-8017-6a8e714bc097
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1cb10721-c011-4749-8017-6a8e714bc097
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1dacb01e-22f8-497f-91c2-0295264fcc29
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0038 | 1 | 11.9363 |
| 11.9352 | 0.0304 | 8 | 11.9356 |
| 11.9298 | 0.0608 | 16 | 11.9338 |
| 11.9331 | 0.0913 | 24 | 11.9326 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vmpsergio/df0fb091-2f67-40a0-88c9-b3cf1e03b1a0 | vmpsergio | 2025-01-10T13:18:29Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:peft-internal-testing/tiny-dummy-qwen2",
"base_model:adapter:peft-internal-testing/tiny-dummy-qwen2",
"region:us"
] | null | 2025-01-10T13:18:11Z | ---
library_name: peft
base_model: peft-internal-testing/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df0fb091-2f67-40a0-88c9-b3cf1e03b1a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: peft-internal-testing/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- faf505f721c74b2f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/faf505f721c74b2f_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: vmpsergio/df0fb091-2f67-40a0-88c9-b3cf1e03b1a0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/faf505f721c74b2f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1cb10721-c011-4749-8017-6a8e714bc097
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1cb10721-c011-4749-8017-6a8e714bc097
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# df0fb091-2f67-40a0-88c9-b3cf1e03b1a0
This model is a fine-tuned version of [peft-internal-testing/tiny-dummy-qwen2](https://huggingface.co/peft-internal-testing/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_hf with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0038 | 1 | 11.9363 |
| 11.9352 | 0.0304 | 8 | 11.9355 |
| 11.9296 | 0.0608 | 16 | 11.9335 |
| 11.9329 | 0.0913 | 24 | 11.9323 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF | mradermacher | 2025-01-10T13:17:27Z | 210 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/YamshadowStrangemerges_32_Inex12Yamshadow",
"base_model:quantized:MaziyarPanahi/YamshadowStrangemerges_32_Inex12Yamshadow",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T12:54:48Z | ---
base_model: MaziyarPanahi/YamshadowStrangemerges_32_Inex12Yamshadow
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: YamshadowStrangemerges_32_Inex12Yamshadow
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/YamshadowStrangemerges_32_Inex12Yamshadow
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
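For a quick start, a minimal sketch with `llama-cpp-python` (one assumed runtime among many GGUF-capable ones; the file name matches the Q4_K_M row in the table below):

```py
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and the Q4_K_M file from the
# table below has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="YamshadowStrangemerges_32_Inex12Yamshadow.Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower it if memory is tight
)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```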
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/YamshadowStrangemerges_32_Inex12Yamshadow-GGUF/resolve/main/YamshadowStrangemerges_32_Inex12Yamshadow.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
adammandic87/0cb4ed6e-9e93-4ca2-8bdc-3332afbeb42a | adammandic87 | 2025-01-10T13:15:12Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:13:18Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0cb4ed6e-9e93-4ca2-8bdc-3332afbeb42a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- de6d74780d92f1e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/de6d74780d92f1e5_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: adammandic87/0cb4ed6e-9e93-4ca2-8bdc-3332afbeb42a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/de6d74780d92f1e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4f770057-7ffb-417a-b383-948f14d743f2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4f770057-7ffb-417a-b383-948f14d743f2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0cb4ed6e-9e93-4ca2-8bdc-3332afbeb42a
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9764
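Because this repo contains a LoRA adapter rather than full weights, it has to be paired with the base model at load time; a minimal sketch with `peft` and `transformers` (generation settings here are illustrative, not from the training run):

```py
# Hedged sketch: attach this LoRA adapter to its Qwen2.5 base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-1.5B-Instruct"
adapter_id = "adammandic87/0cb4ed6e-9e93-4ca2-8bdc-3332afbeb42a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```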
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9597 | 0.0006 | 1 | 2.0278 |
| 2.0966 | 0.0017 | 3 | 2.0272 |
| 1.2491 | 0.0034 | 6 | 2.0092 |
| 2.0283 | 0.0051 | 9 | 1.9764 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Alecardo/Vans-Knu-678119bde5b5b1e8e49eb2ed | Alecardo | 2025-01-10T13:14:47Z | 12 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-01-10T12:59:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: 69knuvans
---
# Vans Knu 678119Bde5B5B1E8E49Eb2Ed
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `69knuvans` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA on top of it.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Alecardo/Vans-Knu-678119bde5b5b1e8e49eb2ed', weight_name='lora.safetensors')

# Include the trigger word `69knuvans` in the prompt to activate the LoRA.
image = pipeline('69knuvans, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
philip-hightech/7a43e85a-cc2b-4675-985f-3c85c58e0dec | philip-hightech | 2025-01-10T13:14:31Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:13:18Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7a43e85a-cc2b-4675-985f-3c85c58e0dec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ea05e688f1b04bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ea05e688f1b04bd_train_data.json
type:
field_instruction: question
field_output: query
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/7a43e85a-cc2b-4675-985f-3c85c58e0dec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ea05e688f1b04bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f44642e-7e83-4010-954d-52d13ace7486
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f44642e-7e83-4010-954d-52d13ace7486
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7a43e85a-cc2b-4675-985f-3c85c58e0dec
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4816 | 0.0011 | 1 | 1.8152 |
| 1.7393 | 0.0032 | 3 | 1.8035 |
| 1.185 | 0.0064 | 6 | 1.6187 |
| 0.8858 | 0.0097 | 9 | 1.1555 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso03/e302593d-24d9-4972-b008-d679f0f6cabb | lesso03 | 2025-01-10T13:12:36Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:09:23Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e302593d-24d9-4972-b008-d679f0f6cabb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- 4ea05e688f1b04bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ea05e688f1b04bd_train_data.json
type:
field_instruction: question
field_output: query
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso03/e302593d-24d9-4972-b008-d679f0f6cabb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/4ea05e688f1b04bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 20
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f44642e-7e83-4010-954d-52d13ace7486
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f44642e-7e83-4010-954d-52d13ace7486
warmup_steps: 5
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# e302593d-24d9-4972-b008-d679f0f6cabb
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4795 | 0.0011 | 1 | 1.6632 |
| 1.8533 | 0.0043 | 4 | 1.6611 |
| 1.2559 | 0.0086 | 8 | 1.6412 |
| 2.2233 | 0.0129 | 12 | 1.6042 |
| 1.3859 | 0.0172 | 16 | 1.5637 |
| 1.3765 | 0.0215 | 20 | 1.5330 |
| 1.7013 | 0.0258 | 24 | 1.5162 |
| 1.8297 | 0.0301 | 28 | 1.5103 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
StefaniaCri/mt5_romainian_to_emoji_mixed | StefaniaCri | 2025-01-10T13:11:15Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-01-10T13:09:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
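Until the authors fill this in, a minimal sketch that assumes the checkpoint follows the standard mT5 text2text interface (the Romanian example input is arbitrary):

```py
# Hedged sketch, assuming the standard mT5 seq2seq interface.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "StefaniaCri/mt5_romainian_to_emoji_mixed"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Mi-e dor de tine", return_tensors="pt")  # "I miss you" in Romanian
out = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```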
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sergioalves/214efd9a-bb0a-4996-a68b-88349ad1d5a0 | sergioalves | 2025-01-10T13:11:13Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:09:15Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 214efd9a-bb0a-4996-a68b-88349ad1d5a0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ea05e688f1b04bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ea05e688f1b04bd_train_data.json
type:
field_instruction: question
field_output: query
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: sergioalves/214efd9a-bb0a-4996-a68b-88349ad1d5a0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ea05e688f1b04bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_hf
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f44642e-7e83-4010-954d-52d13ace7486
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f44642e-7e83-4010-954d-52d13ace7486
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 214efd9a-bb0a-4996-a68b-88349ad1d5a0
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_hf with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 2.2348 |
| 2.0728 | 0.0086 | 8 | 1.9668 |
| 1.5455 | 0.0172 | 16 | 1.4228 |
| 1.4818 | 0.0258 | 24 | 1.3542 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kostiantynk/d33b2b60-7471-49cc-ac6b-9122a94ee56a | kostiantynk | 2025-01-10T13:10:30Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-01-10T13:09:16Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d33b2b60-7471-49cc-ac6b-9122a94ee56a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4ea05e688f1b04bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4ea05e688f1b04bd_train_data.json
type:
field_instruction: question
field_output: query
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/d33b2b60-7471-49cc-ac6b-9122a94ee56a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4ea05e688f1b04bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f44642e-7e83-4010-954d-52d13ace7486
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f44642e-7e83-4010-954d-52d13ace7486
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d33b2b60-7471-49cc-ac6b-9122a94ee56a
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
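For completeness, a minimal hedged sketch of loading this LoRA adapter for inference; the repo IDs come from this card, while the prompt, dtype, and generation settings are illustrative:

```python
# Minimal sketch: attach the LoRA adapter from this repo to its Qwen2 base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kostiantynk/d33b2b60-7471-49cc-ac6b-9122a94ee56a")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

# Hypothetical question-style prompt, matching the question -> query fields above.
inputs = tokenizer("How many users signed up last week?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```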
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4816 | 0.0011 | 1 | 1.8152 |
| 1.7413 | 0.0032 | 3 | 1.8050 |
| 1.1817 | 0.0064 | 6 | 1.6196 |
| 0.8896 | 0.0097 | 9 | 1.1614 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Triangle104/Llama-Thinker-3B-Preview2-Q8_0-GGUF | Triangle104 | 2025-01-10T13:05:16Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"deep_think",
"reasoning",
"chain_of_thought",
"chain_of_thinking",
"prev_2",
"self_reasoning",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:prithivMLmods/Llama-Thinker-3B-Preview2",
"base_model:quantized:prithivMLmods/Llama-Thinker-3B-Preview2",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-10T13:04:02Z | ---
license: creativeml-openrail-m
library_name: transformers
tags:
- deep_think
- reasoning
- chain_of_thought
- chain_of_thinking
- prev_2
- self_reasoning
- llama-cpp
- gguf-my-repo
language:
- en
base_model: prithivMLmods/Llama-Thinker-3B-Preview2
pipeline_tag: text-generation
---
# Triangle104/Llama-Thinker-3B-Preview2-Q8_0-GGUF
This model was converted to GGUF format from [`prithivMLmods/Llama-Thinker-3B-Preview2`](https://huggingface.co/prithivMLmods/Llama-Thinker-3B-Preview2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Llama-Thinker-3B-Preview2) for more details on the model.
---
Model details:
Llama-Thinker-3B-Preview2 is a pretrained and instruction-tuned
generative model designed for multilingual applications. These models
are trained using synthetic datasets based on long chains of thought,
enabling them to perform complex reasoning tasks effectively.
Model Architecture: Llama-Thinker-3B-Preview2 is based on Llama 3.2, an autoregressive
language model that uses an optimized transformer architecture. The
tuned versions undergo supervised fine-tuning (SFT) and reinforcement
learning with human feedback (RLHF) to align with human preferences for
helpfulness and safety.
Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Llama-Thinker-3B-Preview2"

# Build a text-generation pipeline that loads the model in bfloat16
# and places it automatically across available devices.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the full conversation; the last entry is the reply.
print(outputs[0]["generated_text"][-1])
```
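The same conversation can also be run through the Auto classes with generate(), as mentioned above. A hedged sketch; the chat-template handling and generation settings here are illustrative:

```python
# Illustrative Auto-classes route, equivalent to the pipeline example above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Llama-Thinker-3B-Preview2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
# Render the chat with the model's template and append the generation prompt.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```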
Note: You can also find detailed recipes on how to use the model locally with torch.compile(), assisted generation, quantization, and more at huggingface-llama-recipes
Use with llama
Please follow the instructions in the repository.
To download Original checkpoints, see the example command below leveraging huggingface-cli:
```bash
huggingface-cli download prithivMLmods/Llama-Thinker-3B-Preview2 --include "original/*" --local-dir Llama-Thinker-3B-Preview2
```
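Equivalently, the same checkpoint files can be fetched programmatically via `huggingface_hub` (a hedged sketch; the CLI command above remains the documented route):

```python
# Download the original/* checkpoint files into a local directory.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="prithivMLmods/Llama-Thinker-3B-Preview2",
    allow_patterns="original/*",
    local_dir="Llama-Thinker-3B-Preview2",
)
```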
How to Run Llama-Thinker-3B-Preview2 on Ollama Locally
This guide demonstrates how to run the Llama-Thinker-3B-Preview2-GGUF
model locally using Ollama. The model is instruction-tuned for
multilingual tasks and complex reasoning, making it highly versatile for
a wide range of use cases. By the end, you'll be equipped to run this
and other open-source models with ease.
Example 1: How to Run the Llama-Thinker-3B-Preview2 Model
The Llama-Thinker-3B-Preview2 model is a pretrained
and instruction-tuned LLM, designed for complex reasoning tasks across
multiple languages. In this guide, we'll interact with it locally using
Ollama, with support for quantized models.
Step 1: Download the Model
First, download the Llama-Thinker-3B-Preview2-GGUF model using the following command:
```bash
ollama run llama-thinker-3b-preview2.gguf
```
Step 2: Model Initialization and Download
Once the command is executed, Ollama will initialize and download the
necessary model files. You should see output similar to this:
```
pulling manifest
pulling a12cd3456efg... 100% ██████████████████████████████ 3.2 GB
pulling 9f87ghijklmn... 100% ██████████████████████████████ 6.5 KB
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Send a message (/? for help)
```
Step 3: Interact with the Model
Once the model is fully loaded, you can interact with it by sending prompts. For example, let's ask:
```
>>> How can you assist me today?
```
A sample response might look like this (actual output may vary):
```
I am Llama-Thinker-3B-Preview2, an advanced AI language model designed to assist with complex reasoning, multilingual tasks, and general-purpose queries. Here are a few things I can help you with:
1. Answering complex questions in multiple languages.
2. Assisting with creative writing, content generation, and problem-solving.
3. Providing detailed summaries and explanations.
4. Translating text across different languages.
5. Generating ideas for personal or professional use.
6. Offering insights on technical topics.
Feel free to ask me anything you'd like assistance with!
```
Step 4: Exit the Program
To exit the program, simply type:
```
/exit
```
Example 2: Using Multi-Modal Models (Future Use)
In the future, Ollama may support multi-modal models where you can
input both text and images for advanced interactions. This section will
be updated as new capabilities become available.
Notes on Using Quantized Models
Quantized models like llama-thinker-3b-preview2.gguf
are optimized for efficient performance on local systems with limited
resources. Here are some key points to ensure smooth operation:
VRAM/CPU Requirements: Ensure your system has adequate VRAM or CPU resources to handle model inference.
Model Format: Use the .gguf model format for compatibility with Ollama (see the Modelfile sketch below).
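If the GGUF file is already on disk rather than pulled from a registry, a minimal Modelfile sketch would look like this (the file and model names are assumed from the steps above):

```
FROM ./llama-thinker-3b-preview2.gguf
```

Register and run it with:

```bash
ollama create llama-thinker-3b-preview2 -f Modelfile
ollama run llama-thinker-3b-preview2
```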
Conclusion
Running the Llama-Thinker-3B-Preview2 model locally
using Ollama provides a powerful way to leverage open-source LLMs for
complex reasoning and multilingual tasks. By following this guide, you
can explore other models and expand your use cases as new models become
available.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-Thinker-3B-Preview2-Q8_0-GGUF --hf-file llama-thinker-3b-preview2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-Thinker-3B-Preview2-Q8_0-GGUF --hf-file llama-thinker-3b-preview2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-Thinker-3B-Preview2-Q8_0-GGUF --hf-file llama-thinker-3b-preview2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-Thinker-3B-Preview2-Q8_0-GGUF --hf-file llama-thinker-3b-preview2-q8_0.gguf -c 2048
```
|