modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
kammbo/klue-roberta-base-klue-sts | kammbo | 2025-04-29T01:37:15Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10501",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-29T01:36:22Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10501
- loss:CosineSimilarityLoss
base_model: klue/roberta-base
widget:
- source_sentence: μ λ½λ°(ηΌ) μ κ΅μ κ²μ κ°νμ‘°μΉκ° μνλ 첫 λ μΈ 22μΌΒ μ μ¦μμ 152λͺ μ΄ κ³΅ν 격리μμ€μμ 격리 λ° μ§λ¨κ²μ¬λ₯Ό λ°μλ€.
sentences:
- μ΄ κ²½μ° κ²©λ¦¬ λ©΄μ λ ν΄μΈ κΈ°μ μΈμ΄ κ΅λ΄ μ κ΅ μ μμ격리μμ€(1λ° 2μΌ)μμ κ²μ¬λ₯Ό λ°μ ν κ²μ¬κ²°κ³Ό μμ±μΌλ‘ νμ λ κ²½μ° μ΅μ’ μ μΌλ‘ 격리 λ©΄μ κ° μ΄λ€μ§λ€.
- κΈνκ² λ©μΌμ 보λ΄μ§ λ§κ³ λ°μ‘ μ μ μ°¨λΆνκ² 2λ² νμΈνλλ‘ ν΄
- νΈλ¦½λ·μ»΄ μμ½ μ λ©μΌμ λ°μΌλ©΄ μ΄λ€ μ λ³΄κ° λ΄κ²¨μμ΄?
- source_sentence: μμμ 체ν¬μΈ μ₯μκ° μ’ λ€λ₯΄λ μ°Έκ³ νμΈμ.
sentences:
- μλλ²λ¬μ λ€λ₯΄μ€ μμ μ΄λΌλ©΄ κ°μΆν©λλ€!
- νμ₯μ€μ΄ μλμ μΌλ‘ μ’μ§λ§ λΆνΈν μ λλ μλλλ€.
- μλ°μμ€κ³Ό 체ν¬μΈ μ₯μκ° μ‘°κΈ λ€λ¦ λλ€.
- source_sentence: λ΄μΌ ν©μ¬ μ§μ μλ €μ€.
sentences:
- λ¦μ΄λ΄€μ μ’μ κ±° μμΌλκΉ νμ¬ μμ¬νκ³ μ λ μ½μμλ λ¦μ§ λ§.
- μ΅λ κ°μ μκ°λλ? μμΈ μ§μ.
- μ¬ν΄ 17μ‘° 5000μ΅μμμ 2024λ μλ 21μ‘°κΉμ§ νλν λ°©μΉ¨μ΄λ€.
- source_sentence: λ΄λ μλ°κΈ°μλ βμμ°μ± νμ μ μν μ€μ₯κΈ° μ λ΅κ³Όμ βλ₯Ό μ립νλ λμμ κ³ λ Ήμλ€μ κ³μκ³ μ©μ΄ νμ±νλ μ μλλ‘ κΈ°μ κ³ μ©λΆλ΄μ μνν μλ‘μ΄ κ³ λ Ήμ μΌμ리 λͺ¨λΈλ λ§λ ¨νλ€.
sentences:
- λ¬Έν체μ‘κ΄κ΄λΆκ° μ½λ‘λ19 극볡μ μν μμ λ‘κ³ λ₯Ό κ΅λ―Όλ€μ΄ 무λ£λ‘ νμ©ν μ μλλ‘ λ°°ν¬νλ€.
- νμ¬λ λ§κ°μΌ μ΄ν μ κ³ μμ μ κ³ μ μ€λ₯κ° μμ κ²½μ° μ μ μμ² λ° μ μ μ κ³ κ° λΆκ°λ₯νμΌλ μμΌλ‘λ μ κ³ μμκ²λ μ μ κΈ°νλ₯Ό λΆμ¬ν©λλ€.
- μλ΄μ μ§νμ² μ¬μ΄μ 거리λ κ°κΉμ μ΅λλ€.
- source_sentence: μλμ¦μμ 보λ΄λ κ΄κ³ λ©μΌμ λ°μ§λ§
sentences:
- μ§λ©μΌμ μ°λ©΄ 첨λΆνμΌμ λͺκ°κΉμ§ λ³΄λΌ μ μμ§?
- λ¬Έ μ¬λ λ°©λ²λ λΉκ΅μ μ½κ³ κ°λ¨ν©λλ€!
- κ·Έλ€μ λμκ² λ§₯μ£Ό λ μΊκ³Ό μ£Όμ€ κ·Έλ¦¬κ³ λ¬Όμ μ£Όμμ΅λλ€.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on klue/roberta-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.3477070672169138
name: Pearson Cosine
- type: spearman_cosine
value: 0.35560473197486514
name: Spearman Cosine
- type: pearson_cosine
value: 0.9609074593444991
name: Pearson Cosine
- type: spearman_cosine
value: 0.9191116352550575
name: Spearman Cosine
---
# SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("kammbo/klue-roberta-base-klue-sts")
# Run inference
sentences = [
'μλμ¦μμ 보λ΄λ κ΄κ³ λ©μΌμ λ°μ§λ§',
'μ§λ©μΌμ μ°λ©΄ 첨λΆνμΌμ λͺκ°κΉμ§ λ³΄λΌ μ μμ§?',
'λ¬Έ μ¬λ λ°©λ²λ λΉκ΅μ μ½κ³ κ°λ¨ν©λλ€!',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.3477 |
| **spearman_cosine** | **0.3556** |
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9609 |
| **spearman_cosine** | **0.9191** |
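The tables above can be reproduced with the same evaluator class. The snippet below is a minimal sketch: the sentence pairs, gold scores, and the `sts-dev` name are placeholders, not the actual evaluation split behind this card.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("kammbo/klue-roberta-base-klue-sts")

# Placeholder pairs with gold similarity scores normalized to [0, 1]
sentences1 = ["The check-in location is slightly different.", "Tell me tomorrow's fine-dust forecast."]
sentences2 = ["Check-in happens at a different spot.", "What is the current temperature in Seoul?"]
gold_scores = [0.82, 0.05]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1,
    sentences2,
    gold_scores,
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev",
)
results = evaluator(model)
print(results)  # e.g. {"sts-dev_pearson_cosine": ..., "sts-dev_spearman_cosine": ...}
```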
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,501 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 tokens</li><li>mean: 20.09 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 19.5 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.44</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|:---------------------------------|
| <code>μ§λ 1990λ νλλμμ μ²μ λμ ν βνμμΈβλ₯Ό μ¨μ€κ°μ€ μ κ°Β μλ¨μΌλ‘ νμ©νλ €λ κ°κ΅μ μμ§μλ νλ°νλ€.</code> | <code>κ°κ΅μ λν μ¨μ€ κ°μ€ λ°°μΆμ μ€μ΄κΈ° μν μλ¨μΌλ‘ 1990λ νλλμμ μ²μ λμ λ 'νμ μΈκΈ'μ μ κ·Ήμ μΌλ‘ μ¬μ©νλ €κ³ νκ³ μμ΅λλ€.</code> | <code>0.42000000000000004</code> |
| <code>κ·Έλ¬λ―λ‘ μ κ·Όμ² μμκ° κ°μ₯ νΈλ¦¬νλ€κ³ μκ°ν©λλ€.</code> | <code>κ·Έλμ μ λ μ κ·Όμ²μ μλ μμκ° κ°μ₯ νΈλ¦¬νλ€κ³ μκ°ν©λλ€.</code> | <code>0.82</code> |
| <code>λλ κ·Έ μΌνμΌλ‘ BCμΉ΄λ λ§€μΆμ 64%λ 10μ΅μ μ΄μ λ§€μ₯μμ μ¬μ©λλ λ°λ©΄ μ§μννλ 3μ΅μ λ―Έλ§ λ§€μ₯μμ μ¬μ©λλ λΉμ¨μ΄ 36.7%λΌλ κ·Όκ±°λ₯Ό μ μνλ€.</code> | <code>μμΈμ 15μ΅μ μ΄κ³Ό μ§κ° μμΉλ₯ μ 12μ 3μ£Ό 0.40%μμ 1μ 1μ£Ό -0.08%λ‘ νλ½ μ ννλ€.</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
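For context, a minimal training sketch using these components is shown below. The rows mirror the `(sentence_0, sentence_1, label)` schema above but are placeholders, and the hyperparameters of the actual run are listed in the next section.
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("klue/roberta-base")

# Placeholder rows following the (sentence_0, sentence_1, label) schema
train_dataset = Dataset.from_dict({
    "sentence_0": ["I think the nearby laundry room is the most convenient."],
    "sentence_1": ["So I think the laundry room close by is the most convenient."],
    "label": [0.82],
})

# CosineSimilarityLoss regresses the cosine similarity of the two embeddings
# onto the label with torch.nn.MSELoss, matching the configuration above.
loss = losses.CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```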
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| -1 | -1 | - | 0.3556 |
| 0.0761 | 50 | - | 0.8659 |
| 0.1522 | 100 | - | 0.8890 |
| 0.2283 | 150 | - | 0.8969 |
| 0.3044 | 200 | - | 0.9008 |
| 0.3805 | 250 | - | 0.9026 |
| 0.4566 | 300 | - | 0.9055 |
| 0.5327 | 350 | - | 0.9023 |
| 0.6088 | 400 | - | 0.9076 |
| 0.6849 | 450 | - | 0.9019 |
| 0.7610 | 500 | 0.0282 | 0.9067 |
| 0.8371 | 550 | - | 0.9060 |
| 0.9132 | 600 | - | 0.9090 |
| 0.9893 | 650 | - | 0.9074 |
| 1.0 | 657 | - | 0.9077 |
| 1.0654 | 700 | - | 0.9091 |
| 1.1416 | 750 | - | 0.9120 |
| 1.2177 | 800 | - | 0.9085 |
| 1.2938 | 850 | - | 0.9117 |
| 1.3699 | 900 | - | 0.9137 |
| 1.4460 | 950 | - | 0.9126 |
| 1.5221 | 1000 | 0.008 | 0.9137 |
| 1.5982 | 1050 | - | 0.9148 |
| 1.6743 | 1100 | - | 0.9155 |
| 1.7504 | 1150 | - | 0.9134 |
| 1.8265 | 1200 | - | 0.9141 |
| 1.9026 | 1250 | - | 0.9142 |
| 1.9787 | 1300 | - | 0.9155 |
| 2.0 | 1314 | - | 0.9163 |
| 2.0548 | 1350 | - | 0.9174 |
| 2.1309 | 1400 | - | 0.9177 |
| 2.2070 | 1450 | - | 0.9171 |
| 2.2831 | 1500 | 0.005 | 0.9191 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
wangyingjia8/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_wily_ant | wangyingjia8 | 2025-04-29T01:37:02Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am whiskered wily ant",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T09:56:38Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_wily_ant
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whiskered wily ant
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_wily_ant
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wangyingjia8/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-whiskered_wily_ant", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
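For orientation only, the sketch below shows what a plain TRL GRPO run on this base model can look like. The prompts, the toy length-based reward, and the generation settings are illustrative placeholders; the actual checkpoint was produced inside the Gensyn RL-swarm setup, not by this script.
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompts standing in for the real swarm task
train_dataset = Dataset.from_dict({"prompt": ["Solve: 12 * 7 = ?", "What is 15% of 200?"]})

def reward_len(completions, **kwargs):
    # Toy reward that prefers shorter completions; a stand-in for a real task reward
    return [-float(len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="qwen2.5-0.5b-grpo",
    per_device_train_batch_size=2,
    num_generations=2,
    max_completion_length=64,
)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```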
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jimmypan/llama381binstruct_summarize_short | jimmypan | 2025-04-29T01:36:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:NousResearch/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T01:36:39Z | ---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama381binstruct_summarize_short
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama381binstruct_summarize_short
This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jimmypan/llama381binstruct_summarize_short", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/moonshade9-amazon/huggingface/runs/piexpda8)
This model was trained with SFT.
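As a rough illustration of an SFT run on this base model, a minimal TRL sketch is shown below. The chat-formatted row is a placeholder; the actual summarization data and any PEFT or quantization settings of this run are not documented here.
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder chat-formatted example; not taken from the actual training data
train_dataset = Dataset.from_dict({
    "messages": [[
        {"role": "user", "content": "Summarize: The meeting covered budget, hiring, and the Q3 roadmap."},
        {"role": "assistant", "content": "Budget, hiring, and the Q3 roadmap were discussed."},
    ]]
})

trainer = SFTTrainer(
    model="NousResearch/Meta-Llama-3.1-8B-Instruct",
    args=SFTConfig(output_dir="llama381binstruct_summarize_short"),
    train_dataset=train_dataset,
)
trainer.train()
```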
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
magvtv/rada-nlp | magvtv | 2025-04-29T01:36:05Z | 46 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-15T08:45:50Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: rada-nlp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rada-nlp
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6418
- Rouge1: 32.2628
- Rouge2: 17.6188
- Rougel: 28.3685
- Rougelsum: 28.3035
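A minimal inference sketch is shown below, assuming the checkpoint works as a standard T5-style text2text model; the card does not document the expected input format, so the prompt is a placeholder.
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="magvtv/rada-nlp")

# Placeholder input; the expected prompt format is not documented in this card
text = "The clinic reported a rise in seasonal flu cases and recommends vaccination for at-risk groups."
print(generator(text, max_new_tokens=64)[0]["generated_text"])
```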
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.8877 | 1.0 | 4 | 2.6676 | 32.1274 | 17.427 | 27.543 | 28.0416 |
| 1.8556 | 2.0 | 8 | 2.6705 | 31.2511 | 16.3095 | 26.6854 | 26.8166 |
| 1.8127 | 3.0 | 12 | 2.6705 | 31.037 | 16.0077 | 26.813 | 26.6464 |
| 1.784 | 4.0 | 16 | 2.6686 | 31.5008 | 16.2333 | 26.9957 | 26.7969 |
| 1.7672 | 5.0 | 20 | 2.6711 | 31.2118 | 15.9968 | 26.9476 | 26.9864 |
| 1.7407 | 6.0 | 24 | 2.6716 | 31.4189 | 15.9951 | 26.8681 | 26.7424 |
| 1.742 | 7.0 | 28 | 2.6701 | 30.9705 | 16.0005 | 26.5473 | 26.8081 |
| 1.7356 | 8.0 | 32 | 2.6687 | 31.906 | 17.254 | 27.7267 | 27.6687 |
| 1.7271 | 9.0 | 36 | 2.6654 | 31.8302 | 17.1851 | 27.4294 | 27.4945 |
| 1.7224 | 10.0 | 40 | 2.6606 | 31.5091 | 17.1353 | 27.8425 | 27.5751 |
| 1.7207 | 11.0 | 44 | 2.6575 | 31.6189 | 17.3582 | 27.5163 | 27.519 |
| 1.7404 | 12.0 | 48 | 2.6539 | 32.0071 | 17.1878 | 27.6051 | 27.7916 |
| 1.7213 | 13.0 | 52 | 2.6504 | 32.6314 | 17.5002 | 28.0328 | 28.0245 |
| 1.7606 | 14.0 | 56 | 2.6472 | 32.5161 | 17.4726 | 28.16 | 28.4421 |
| 1.7839 | 15.0 | 60 | 2.6444 | 32.3599 | 17.9836 | 27.9445 | 28.0023 |
| 1.812 | 16.0 | 64 | 2.6418 | 32.2628 | 17.6188 | 28.3685 | 28.3035 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
li55555/zephyr_spin_iter2 | li55555 | 2025-04-29T01:33:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T01:29:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
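Pending the details above, a minimal sketch is given below, assuming this is a standard conversational causal-LM checkpoint (the repository tags list `mistral`, `text-generation`, and `conversational`); the prompt is illustrative.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="li55555/zephyr_spin_iter2", device_map="auto")
messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```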
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zack-Z/gemma3_27bi_cotsft_rs0_3_5cut_ru_gem3_e2 | Zack-Z | 2025-04-29T01:30:25Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-27b-it",
"base_model:finetune:unsloth/gemma-3-27b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T23:58:31Z | ---
base_model: unsloth/gemma-3-27b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-27b-it
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mhr2004/roberta-large-stsb-lr2e-05-bs32 | mhr2004 | 2025-04-29T01:27:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-04-29T01:11:06Z | ---
library_name: transformers
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
model-index:
- name: roberta-large-stsb-lr2e-05-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-stsb-lr2e-05-bs32
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0166
- Pearson: 0.9185
- Spearman: 0.9187
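A minimal inference sketch is shown below. It assumes the model uses a single-logit regression head over sentence pairs, which is the usual STS-B setup but is not stated explicitly in this card; the sentence pair is a placeholder.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "mhr2004/roberta-large-stsb-lr2e-05-bs32"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Encode a sentence pair; a higher score means the pair is more similar under the training label scale
inputs = tokenizer("A man is playing a guitar.", "A person plays a guitar.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```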
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearman |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------:|
| 0.0509 | 1.0 | 180 | 0.0232 | 0.8807 | 0.8813 |
| 0.0327 | 2.0 | 360 | 0.0201 | 0.9042 | 0.9041 |
| 0.0263 | 3.0 | 540 | 0.0165 | 0.9119 | 0.9097 |
| 0.0216 | 4.0 | 720 | 0.0223 | 0.9162 | 0.9153 |
| 0.0206 | 5.0 | 900 | 0.0143 | 0.9188 | 0.9175 |
| 0.0183 | 6.0 | 1080 | 0.0186 | 0.9180 | 0.9164 |
| 0.0161 | 7.0 | 1260 | 0.0151 | 0.9220 | 0.9203 |
| 0.0137 | 8.0 | 1440 | 0.0141 | 0.9203 | 0.9189 |
| 0.0124 | 9.0 | 1620 | 0.0179 | 0.9218 | 0.9200 |
| 0.0112 | 10.0 | 1800 | 0.0144 | 0.9215 | 0.9214 |
| 0.0113 | 11.0 | 1980 | 0.0150 | 0.9218 | 0.9198 |
| 0.0093 | 12.0 | 2160 | 0.0144 | 0.9181 | 0.9171 |
| 0.0089 | 13.0 | 2340 | 0.0166 | 0.9185 | 0.9187 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF | Lucy-in-the-Sky | 2025-04-29T01:26:34Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T01:25:08Z | ---
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF --hf-file dolphin-mistral-24b-venice-edition-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF --hf-file dolphin-mistral-24b-venice-edition-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF --hf-file dolphin-mistral-24b-venice-edition-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q6_K-GGUF --hf-file dolphin-mistral-24b-venice-edition-q6_k.gguf -c 2048
```
|
greenwich157/Qwen2.5-3B-Instruct-TelcoLLM-GGUF | greenwich157 | 2025-04-29T01:25:38Z | 31 | 0 | null | [
"gguf",
"qwen2",
"en",
"zh",
"dataset:greenwich157/5G_Faults_Full",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-27T02:31:19Z | ---
license: apache-2.0
datasets:
- greenwich157/5G_Faults_Full
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
**Covers 5G mobile-network faults suitable for engineer evaluation, based on a synthetic dataset.** |
Lucy-in-the-Sky/Qwen2.5-1.5B-Instruct-Q6_K-GGUF | Lucy-in-the-Sky | 2025-04-29T01:24:56Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-02-20T21:14:16Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- chat
- llama-cpp
- gguf-my-repo
library_name: transformers
---
# Lucy-in-the-Sky/Qwen2.5-1.5B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Qwen2.5-1.5B-Instruct-Q6_K-GGUF --hf-file qwen2.5-1.5b-instruct-q6_k.gguf -c 2048
```
|
infogep/8559b4d9-7871-4dbb-ac96-1b77aaa15f2f | infogep | 2025-04-29T01:21:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T01:15:53Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8559b4d9-7871-4dbb-ac96-1b77aaa15f2f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 67114b4672ccfa56_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67114b4672ccfa56_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: infogep/8559b4d9-7871-4dbb-ac96-1b77aaa15f2f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/67114b4672ccfa56_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4365af0f-8b36-406d-b2f7-4d21c6c582bd
wandb_project: s56-30
wandb_run: your_name
wandb_runid: 4365af0f-8b36-406d-b2f7-4d21c6c582bd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8559b4d9-7871-4dbb-ac96-1b77aaa15f2f
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 9.1212
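Since this repository holds a PEFT (LoRA) adapter, a minimal loading sketch is given below. It assumes the repo contains adapter weights only and that the base model fits on the available hardware; the prompt is illustrative.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3.5-mini-instruct"
adapter_id = "infogep/8559b4d9-7871-4dbb-ac96-1b77aaa15f2f"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Explain what a LoRA adapter is in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```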
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.4729 | 0.1201 | 200 | 9.1212 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
marialvsantiago/fb5d6bfb-6a71-4d7a-93bf-7e0f852fae50 | marialvsantiago | 2025-04-29T01:21:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T01:17:14Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fb5d6bfb-6a71-4d7a-93bf-7e0f852fae50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 67114b4672ccfa56_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/67114b4672ccfa56_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/fb5d6bfb-6a71-4d7a-93bf-7e0f852fae50
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/67114b4672ccfa56_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4365af0f-8b36-406d-b2f7-4d21c6c582bd
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 4365af0f-8b36-406d-b2f7-4d21c6c582bd
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fb5d6bfb-6a71-4d7a-93bf-7e0f852fae50
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 9.0095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 7.4489 | 0.1201 | 200 | 9.0095 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF | Lucy-in-the-Sky | 2025-04-29T01:18:00Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"base_model:quantized:cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T01:16:49Z | ---
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition`](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Lucy-in-the-Sky/Dolphin-Mistral-24B-Venice-Edition-Q4_K_M-GGUF --hf-file dolphin-mistral-24b-venice-edition-q4_k_m.gguf -c 2048
```
|
jayellho/whisper-large-v3-turbo-imda-part4-bs32-grad4-dl4-4h200-splbat-16cpus-optiLR-1600maxst-perstwkrs | jayellho | 2025-04-29T01:16:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:data_loading_script",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-04-29T00:38:32Z | ---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- data_loading_script
model-index:
- name: whisper-large-v3-turbo-imda-part4-bs32-grad4-dl4-4h200-splbat-16cpus-optiLR-1600maxst-perstwkrs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-turbo-imda-part4-bs32-grad4-dl4-4h200-splbat-16cpus-optiLR-1600maxst-perstwkrs
This model was trained from scratch on the data_loading_script dataset.
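A minimal transcription sketch is shown below, assuming this is a standard Whisper checkpoint usable through the `transformers` speech-recognition pipeline; `sample.wav` is a placeholder path.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jayellho/whisper-large-v3-turbo-imda-part4-bs32-grad4-dl4-4h200-splbat-16cpus-optiLR-1600maxst-perstwkrs",
)
result = asr("sample.wav", return_timestamps=True)  # placeholder audio file
print(result["text"])
```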
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.0417199205400627e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- total_eval_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 160
- training_steps: 1600
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.4.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
peterwa/Qwen2.5-7B-instruct-GRPO-GSM8K | peterwa | 2025-04-29T01:16:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T01:09:16Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
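In the meantime, a minimal sketch is given below, assuming this is a standard conversational Qwen2.5 checkpoint; the GSM8K-style question is illustrative.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="peterwa/Qwen2.5-7B-instruct-GRPO-GSM8K", device_map="auto")
question = "Natalia sold clips to 48 friends in April and half as many in May. How many clips did she sell in total?"
print(generator([{"role": "user", "content": question}], max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```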
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vmpsergio/27ddc76e-6f2a-404d-8369-0ec4c2735092 | vmpsergio | 2025-04-29T01:16:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T00:39:10Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 27ddc76e-6f2a-404d-8369-0ec4c2735092
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 09fd8de16e0ef037_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09fd8de16e0ef037_train_data.json
type:
field_input: Patient
field_instruction: Description
field_output: Doctor
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/27ddc76e-6f2a-404d-8369-0ec4c2735092
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/09fd8de16e0ef037_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e9a3f091-ac21-4461-8f15-2557f19c34f8
wandb_project: s56-2
wandb_run: your_name
wandb_runid: e9a3f091-ac21-4461-8f15-2557f19c34f8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 27ddc76e-6f2a-404d-8369-0ec4c2735092
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0341 | 0.0066 | 200 | 2.5952 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nHTDayQrFAXhHAY/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_fierce_ladybug | nHTDayQrFAXhHAY | 2025-04-29T01:15:51Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am timid fierce ladybug",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T15:41:05Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_fierce_ladybug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am timid fierce ladybug
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_fierce_ladybug
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nHTDayQrFAXhHAY/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_fierce_ladybug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
joelm/llama-3.1-8b-ai-to-pg-finetune-GGUF | joelm | 2025-04-29T01:15:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T01:14:23Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** joelm
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tFQbekUPTuNgAxFkR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_wiry_butterfly | tFQbekUPTuNgAxFkR | 2025-04-29T01:11:45Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lightfooted wiry butterfly",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T11:52:29Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_wiry_butterfly
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lightfooted wiry butterfly
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_wiry_butterfly
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tFQbekUPTuNgAxFkR/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lightfooted_wiry_butterfly", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Cozmicalz/Irix-12B-Model_Stock-mlx-4Bit | Cozmicalz | 2025-04-29T01:10:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"mlx",
"mlx-my-repo",
"conversational",
"base_model:DreadPoor/Irix-12B-Model_Stock",
"base_model:quantized:DreadPoor/Irix-12B-Model_Stock",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-04-29T01:09:56Z | ---
base_model: DreadPoor/Irix-12B-Model_Stock
library_name: transformers
tags:
- mergekit
- merge
- mlx
- mlx-my-repo
---
# Cozmicalz/Irix-12B-Model_Stock-mlx-4Bit
The Model [Cozmicalz/Irix-12B-Model_Stock-mlx-4Bit](https://huggingface.co/Cozmicalz/Irix-12B-Model_Stock-mlx-4Bit) was converted to MLX format from [DreadPoor/Irix-12B-Model_Stock](https://huggingface.co/DreadPoor/Irix-12B-Model_Stock) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Cozmicalz/Irix-12B-Model_Stock-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
phospho-app/nebo1337-GetTheRubberNextG2-mkz2etcus0 | phospho-app | 2025-04-29T01:03:39Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-04-28T23:54:23Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [nebo1337/GetTheRubberNextG2](https://huggingface.co/datasets/nebo1337/GetTheRubberNextG2)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 8000
**Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
**Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
BSC-NLP4BIA/BIOMAT-AnatNER-MTL | BSC-NLP4BIA | 2025-04-29T01:01:01Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-04-29T01:00:21Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
bmFVHfwBm0ktSackD3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_colorful_ostrich | bmFVHfwBm0ktSackD3 | 2025-04-29T00:59:34Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am waddling colorful ostrich",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T13:27:03Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_colorful_ostrich
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am waddling colorful ostrich
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_colorful_ostrich
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bmFVHfwBm0ktSackD3/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_colorful_ostrich", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
VishnuT/llama3-qlora-phase2.2-adapter | VishnuT | 2025-04-29T00:56:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"region:us"
] | null | 2025-04-29T00:49:10Z | ---
base_model: meta-llama/Llama-3.2-3B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
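In the absence of an official example, here is a minimal loading sketch (not provided by the model authors): it assumes this repository contains a PEFT LoRA adapter for the `meta-llama/Llama-3.2-3B` base model listed above, and the prompt text is purely illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: this repo holds a PEFT adapter trained on top of meta-llama/Llama-3.2-3B.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base_model, "VishnuT/llama3-qlora-phase2.2-adapter")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```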
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
joelm/llama-3.1-8b-ai-to-pg-finetune-16bit | joelm | 2025-04-29T00:55:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T00:54:37Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** joelm
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fQNrIdeWOYvDBCMqov/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_mammalian_macaque | fQNrIdeWOYvDBCMqov | 2025-04-29T00:55:10Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grassy mammalian macaque",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T11:38:09Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_mammalian_macaque
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grassy mammalian macaque
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_mammalian_macaque
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fQNrIdeWOYvDBCMqov/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grassy_mammalian_macaque", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MikeRoz/TheDrummer_Fallen-Gemma3-27B-v1-6.0bpw-h8-exl2 | MikeRoz | 2025-04-29T00:55:08Z | 0 | 0 | null | [
"safetensors",
"gemma3_text",
"exl2",
"license:other",
"6-bit",
"region:us"
] | null | 2025-04-28T23:29:21Z | ---
license: other
base_model: TheDrummer/Fallen-Gemma3-27b-v1
base_model_relation: quantized
tags:
- exl2
---
This model was quantized using commit 3a90264 of the dev branch of exllamav2. The Gemma 3 8k context bug looks to be thoroughly squashed as of this commit. To use this model, please either build your own copy of exllamav2 from the dev branch, or wait for the forthcoming v0.2.9 release.
The original model can be found [here](https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1).
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 5000 helpful LLM enthusiasts! A hub for players and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Fallen Gemma3 27B v1

## Special Thanks
- Thank you to each and every one of you who donated and subscribed on [Patreon](https://www.patreon.com/TheDrummer) and [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- I'm also recently unemployed. I am a Software Developer with 8 years of experience in Web, API, AI, and adapting to new tech and requirements. If you're hiring, feel free to reach out to me.
## Usage
- Use Gemma Chat Template
## Description
Fallen Gemma3 27B v1 is an evil tune of Gemma 3 27B but it is not a complete decensor.
Evil tunes knock out the positivity and may enjoy torturing you and humanity.
Vision still works and it has something to say about the crap you feed it.
## Links
- Original: https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1
- GGUF: https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Fallen-Gemma3-27B-v1-GGUF
`config-v1c`
|
pandorakevin/pandorakevi | pandorakevin | 2025-04-29T00:54:33Z | 0 | 0 | null | [
"license:bsd-3-clause-clear",
"region:us"
] | null | 2025-04-29T00:54:33Z | ---
license: bsd-3-clause-clear
---
|
thomasjthe/SmolLM2-FT-MyDataset | thomasjthe | 2025-04-29T00:53:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-29T00:52:41Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thomasjthe/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/thomashe42-university-of-melbourne/huggingface/runs/z86k0ddc)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kokovova/c172c020-3d0e-4a0c-a72f-0af785cff78b | kokovova | 2025-04-29T00:53:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T00:40:12Z | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c172c020-3d0e-4a0c-a72f-0af785cff78b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 09fd8de16e0ef037_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/09fd8de16e0ef037_train_data.json
  type:
    field_input: Patient
    field_instruction: Description
    field_output: Doctor
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: kokovova/c172c020-3d0e-4a0c-a72f-0af785cff78b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/09fd8de16e0ef037_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e9a3f091-ac21-4461-8f15-2557f19c34f8
wandb_project: s56-4
wandb_run: your_name
wandb_runid: e9a3f091-ac21-4461-8f15-2557f19c34f8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# c172c020-3d0e-4a0c-a72f-0af785cff78b
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: OptimizerNames.ADAMW_BNB with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.063 | 0.0066 | 200 | 2.6110 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
BTOREYES/albertreyes | BTOREYES | 2025-04-29T00:51:06Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-04-29T00:10:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Ej9m6yillwiPBWTyMI1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-furry_mimic_mule | Ej9m6yillwiPBWTyMI1 | 2025-04-29T00:51:02Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am furry mimic mule",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T13:52:28Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-furry_mimic_mule
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am furry mimic mule
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-furry_mimic_mule
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ej9m6yillwiPBWTyMI1/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-furry_mimic_mule", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MoyYuan/DeductiveReasoning-forward-explicit | MoyYuan | 2025-04-29T00:49:24Z | 0 | 0 | null | [
"pytorch",
"bert",
"en",
"dataset:MoyYuan/DeductiveReasoning",
"license:mit",
"region:us"
] | null | 2025-04-29T00:43:28Z | ---
license: mit
datasets:
- MoyYuan/DeductiveReasoning
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/DeductiveReasoning for README information. |
Asif-Sheriff/QAC3 | Asif-Sheriff | 2025-04-29T00:47:36Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-04-11T13:47:36Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
model-index:
- name: QAC3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QAC3
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.51.1
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
MikeRoz/allura-org_Gemma-3-Glitter-27B-6.0bpw-h8-exl2 | MikeRoz | 2025-04-29T00:45:49Z | 0 | 0 | exllamav2 | [
"exllamav2",
"safetensors",
"gemma3",
"exl2",
"base_model:allura-org/Gemma-3-Glitter-27B",
"base_model:quantized:allura-org/Gemma-3-Glitter-27B",
"6-bit",
"region:us"
] | null | 2025-04-28T19:14:11Z | ---
base_model: allura-org/Gemma-3-Glitter-27B
base_model_relation: quantized
library_name: exllamav2
tags:
- exl2
---
This model was quantized using commit 3a90264 of the dev branch of exllamav2. The 8k context bug looks to be thoroughly squashed as of this commit. To use this model, please either build your own copy of exllamav2 from the dev branch, or wait for the forthcoming v0.2.9 release.
The original model can be found [here](https://huggingface.co/allura-org/Gemma-3-Glitter-27B).
# ✨G3 Glitter 27B✨
<figure>
<img src="https://huggingface.co/ToastyPigeon/Gemma-3-Glitter-27B/resolve/main/ComfyUI_02512_.png" width="600">
</figure>
A creative writing model based on Gemma 3 27B.
[Columbidae/gemma-3-27b-half](https://huggingface.co/Columbidae/gemma-3-27b-half), a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of [Starshine](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B), a 50/50 IT and PT merge.)
The inclusion of the PT model does weaken the instruct behavior, but it also weakens the censorship/hesitancy to participate in certain fictional stories. The prose also becomes more natural with less of the IT model included.
**This model does better with short and to-the-point prompts. Long, detailed system prompts will often confuse it.** (Tested with 1000-2000 token system prompts, which gave lackluster results compared to 100-500 token prompts.)
## Instruct Format
Uses Gemma2/3 instruct and context. Like Glitter 12b, this works well with `temp = 1, top-nsigma = 1.5`.
```
<start_of_turn>user
{User messages; can also put sysprompt here to use the built-in g3 training}<end_of_turn>
<start_of_turn>model
{model response}<end_of_turn>
``` |
MoyYuan/DeductiveReasoning-forward | MoyYuan | 2025-04-29T00:44:48Z | 0 | 0 | null | [
"pytorch",
"bert",
"en",
"dataset:MoyYuan/DeductiveReasoning",
"license:mit",
"region:us"
] | null | 2025-04-29T00:21:06Z | ---
license: mit
datasets:
- MoyYuan/DeductiveReasoning
language:
- en
---
Please refer to https://huggingface.co/datasets/MoyYuan/DeductiveReasoning for README information. |
Rkdon11/deberta-v3-large-osint-cybersecurity-ner | Rkdon11 | 2025-04-29T00:41:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-04-29T00:39:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
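In the absence of an official example, here is a minimal usage sketch (not provided by the model authors): it assumes the checkpoint works with the standard token-classification pipeline, and the sample sentence is purely illustrative.
```python
from transformers import pipeline

# Assumption: the checkpoint is a standard token-classification (NER) model.
ner = pipeline(
    "token-classification",
    model="Rkdon11/deberta-v3-large-osint-cybersecurity-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

text = "APT29 delivered Cobalt Strike beacons from 192.0.2.10 over TCP port 443."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```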
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cristiandouglas777/Projet2 | cristiandouglas777 | 2025-04-29T00:39:25Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T00:39:25Z | ---
license: apache-2.0
---
|
raraujo/peft-granite-lora-a100 | raraujo | 2025-04-29T00:34:42Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ibm-granite/granite-3b-code-instruct-2k",
"base_model:adapter:ibm-granite/granite-3b-code-instruct-2k",
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T23:41:55Z | ---
library_name: peft
license: apache-2.0
base_model: ibm-granite/granite-3b-code-instruct-2k
tags:
- generated_from_trainer
model-index:
- name: peft-granite-lora-a100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-granite-lora-a100
This model is a fine-tuned version of [ibm-granite/granite-3b-code-instruct-2k](https://huggingface.co/ibm-granite/granite-3b-code-instruct-2k) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: OptimizerNames.ADAMW_TORCH with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1 |
takedakoji00/Llama-3.1-8B-Instruct-custom-qg-7th_val_val_edit_distance_1000epoch_empty_removed | takedakoji00 | 2025-04-29T00:28:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T03:34:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
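In the absence of an official example, here is a minimal usage sketch (not provided by the model authors): it assumes the checkpoint is a causal language model with a chat template (the repository name suggests a Llama-3.1-8B-Instruct fine-tune for question generation), and the context passage and prompt wording are purely illustrative.
```python
from transformers import pipeline

# Assumption: the checkpoint loads as a causal LM and follows a chat template.
generator = pipeline(
    "text-generation",
    model="takedakoji00/Llama-3.1-8B-Instruct-custom-qg-7th_val_val_edit_distance_1000epoch_empty_removed",
    device_map="auto",
)

context = "The Eiffel Tower was completed in 1889 for the Paris World's Fair."
messages = [{"role": "user", "content": f"Generate a question about the following passage:\n{context}"}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```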
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlfoundations-dev/d1_science_long_paragraphs_3k | mlfoundations-dev | 2025-04-29T00:27:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T19:56:32Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_science_long_paragraphs_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_science_long_paragraphs_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_science_long_paragraphs_3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
KYUNGYONG/EEVE-Korean-Instruct-7B-v2.0-Preview-mlx-4Bit | KYUNGYONG | 2025-04-29T00:25:59Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"generated_from_trainer",
"mlx-my-repo",
"base_model:yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview",
"base_model:quantized:yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2025-04-29T00:25:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- mlx
- mlx-my-repo
base_model: yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview
model-index:
- name: yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview
results: []
---
# KYUNGYONG/EEVE-Korean-Instruct-7B-v2.0-Preview-mlx-4Bit
The Model [KYUNGYONG/EEVE-Korean-Instruct-7B-v2.0-Preview-mlx-4Bit](https://huggingface.co/KYUNGYONG/EEVE-Korean-Instruct-7B-v2.0-Preview-mlx-4Bit) was converted to MLX format from [yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview](https://huggingface.co/yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("KYUNGYONG/EEVE-Korean-Instruct-7B-v2.0-Preview-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
spow12/ChatWaifu_32B_reasoning | spow12 | 2025-04-29T00:23:05Z | 52 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"conversational",
"en",
"ja",
"dataset:HuggingFaceTB/smoltalk",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:facebook/natural_reasoning",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-JP-EN-Coding-Dataset-801k",
"dataset:Aratako/Magpie-Tanuki-8B-97k",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:Aratako/Synthetic-JP-EN-Translation-Dataset-Magpie-Nemotron-4-20k",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:Nopm/Opus_WritingStruct",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:Aratako/Rosebleu-1on1-Dialogues-RP_v2",
"base_model:Qwen/QwQ-32B",
"base_model:merge:Qwen/QwQ-32B",
"base_model:rinna/qwq-bakeneko-32b",
"base_model:merge:rinna/qwq-bakeneko-32b",
"base_model:trashpanda-org/QwQ-32B-Snowdrop-v0",
"base_model:merge:trashpanda-org/QwQ-32B-Snowdrop-v0",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-04T04:48:17Z | ---
language:
- en
- ja
license: cc-by-nc-4.0
library_name: transformers
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
base_model:
- trashpanda-org/QwQ-32B-Snowdrop-v0
- rinna/qwq-bakeneko-32b
- Qwen/QwQ-32B
datasets:
- HuggingFaceTB/smoltalk
- microsoft/orca-agentinstruct-1M-v1
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- facebook/natural_reasoning
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-JP-EN-Coding-Dataset-801k
- Aratako/Magpie-Tanuki-8B-97k
- SkunkworksAI/reasoning-0.01
- anthracite-org/stheno-filtered-v1.1
- Aratako/Synthetic-JP-EN-Translation-Dataset-Magpie-Nemotron-4-20k
- open-r1/OpenR1-Math-220k
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Nopm/Opus_WritingStruct
- gretelai/synthetic_text_to_sql
- kalomaze/Opus_Instruct_3k
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- roleplay4fun/aesir-v1.1
- Aratako/Rosebleu-1on1-Dialogues-RP_v2
pipeline_tag: text-generation
---
# Model Card for Model ID

Merged model using [mergekit](https://github.com/arcee-ai/mergekit/tree/main/mergekit)
This model aims to build an agent system while keeping the given waifu persona.
## Merge Format
```yaml
models:
- model: trashpanda-org/QwQ-32B-Snowdrop-v0
- model: Qwen/QwQ-32B_sft(private)
merge_method: model_stock
base_model: Qwen/QwQ-32B
dtype: bfloat16
tokenizer_source: base
```
## Model Details
### Model Description
- **Developed by:** spow12(yw_nam)
- **Shared by :** spow12(yw_nam)
- **Model type:** CausalLM
- **Language(s) (NLP):** japanese, english
- **Finetuned from model :** [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
### Chat Format
```
<|im_start|>system
This is the system prompt.<|im_end|>
<|im_start|>user
Instructions placed here.<|im_end|>
<|im_start|>assistant
The model's response will be here.<|im_end|>
```
## Reasoning mode
If you want to turn on reasoning mode, include the sentence below in the system message or instruction.
```
Before answer, organize thoughts your thought inside <think> and </think> tags after that, answer in a concise manner.
```
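For reference, the chat format and the reasoning trigger above can be wired together through the tokenizer's chat template. The snippet below is a minimal sketch assuming the standard `transformers` workflow; the generation settings are illustrative and are not taken from this card.
```python
# Minimal sketch: apply the ChatML-style format above and enable reasoning mode.
# Assumptions: standard transformers chat-template workflow; sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/ChatWaifu_32B_reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Append the reasoning trigger sentence from above to the system prompt.
system_prompt = (
    "This is the system prompt. "
    "Before answer, organize thoughts your thought inside <think> and </think> tags "
    "after that, answer in a concise manner."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```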
## Dataset
SFT (585K)
- Riddle Joker (Private)
- Café Stella and the Reaper's Butterflies (Private)
- Senren＊Banka (Private)
- HuggingFaceTB/smoltalk
- microsoft/orca-agentinstruct-1M-v1
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- facebook/natural_reasoning
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-JP-EN-Coding-Dataset-801k
- Aratako/Magpie-Tanuki-8B-97k
- SkunkworksAI/reasoning-0.01
- anthracite-org/stheno-filtered-v1.1
- Aratako/Synthetic-JP-EN-Translation-Dataset-Magpie-Nemotron-4-20k
- open-r1/OpenR1-Math-220k
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Nopm/Opus_WritingStruct
- gretelai/synthetic_text_to_sql
- kalomaze/Opus_Instruct_3k
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- roleplay4fun/aesir-v1.1
- Aratako/Rosebleu-1on1-Dialogues-RP_v2
## Use & Credit
This model is currently available for non-commercial and research purposes only. Also, since I'm not well versed in licensing, I hope you use it responsibly.
By sharing this model, I hope to contribute to the research efforts of our community (the open-source community and Waifu Lovers).
## Citation
```bibtex
@misc {ChatWaifu_32B_reasoning,
author = { YoungWoo Nam },
title = { spow12/ChatWaifu_32B_reasoning },
year = 2025,
url = { https://huggingface.co/spow12/ChatWaifu_32B_reasoning },
publisher = { Hugging Face }
}
```
|
kostiantynk-outlook/dbc44e50-3e7f-48ec-80aa-b7f594546b67 | kostiantynk-outlook | 2025-04-29T00:22:40Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-04-29T00:22:06Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: oopsung/llama2-7b-n-ox-test-v1
model-index:
- name: kostiantynk-outlook/dbc44e50-3e7f-48ec-80aa-b7f594546b67
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kostiantynk-outlook/dbc44e50-3e7f-48ec-80aa-b7f594546b67
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5471
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
redlessone/PanDerm | redlessone | 2025-04-29T00:19:36Z | 0 | 0 | null | [
"medical",
"medical AI",
"SSL",
"foundation_model",
"multimodal",
"skin_cancer",
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2025-04-29T00:18:06Z | ---
license: cc-by-nc-nd-4.0
tags:
- medical
- medical AI
- SSL
- foundation_model
- multimodal
- skin_cancer
--- |
theaivaultqueen/charlaeexum | theaivaultqueen | 2025-04-29T00:15:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T23:38:41Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Charlae
---
# Charlaeexum
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Charlae` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Charlae",
"lora_weights": "https://huggingface.co/theaivaultqueen/charlaeexum/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('theaivaultqueen/charlaeexum', weight_name='lora.safetensors')
image = pipeline('Charlae').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/theaivaultqueen/charlaeexum/discussions) to add images that show off what you've made with this LoRA.
|
mradermacher/MiniusLight-24B-v2.1-i1-GGUF | mradermacher | 2025-04-29T00:13:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DoppelReflEx/MiniusLight-24B-v2.1",
"base_model:quantized:DoppelReflEx/MiniusLight-24B-v2.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T20:56:12Z | ---
base_model: DoppelReflEx/MiniusLight-24B-v2.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DoppelReflEx/MiniusLight-24B-v2.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
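As a quick illustration of running one of the single-file quants, the sketch below uses the `llama-cpp-python` bindings. This is an assumption on my part (the card does not prescribe a specific runtime); any llama.cpp-based tool works equally well.
```python
# Sketch only: llama-cpp-python is assumed as the runtime; the quant filename comes
# from the "Provided Quants" table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the single-file quants listed in the table.
gguf_path = hf_hub_download(
    repo_id="mradermacher/MiniusLight-24B-v2.1-i1-GGUF",
    filename="MiniusLight-24B-v2.1.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an arbitrary choice
result = llm("Write one sentence about imatrix quantization.", max_tokens=64)
print(result["choices"][0]["text"])
```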
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiniusLight-24B-v2.1-i1-GGUF/resolve/main/MiniusLight-24B-v2.1.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
King-Cane/Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M-GGUF | King-Cane | 2025-04-29T00:13:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PocketDoc/Dans-Mathmaxx-Numina-CoT",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Benchmaxx-COT",
"dataset:PocketDoc/Dans-Codemaxx-LeetCode",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn",
"dataset:PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-ASCIIMaxx-Wordart",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Sharegpt",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-NoRobots",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-EvolKit",
"dataset:PocketDoc/Dans-Assistantmaxx-Camel-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-Skunkworks",
"dataset:PocketDoc/Dans-Logicmaxx-FI-VeriMed",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PocketDoc/Dans-Logicmaxx-Magpie-Ultra",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Personamaxx",
"dataset:PocketDoc/Dans-Personamaxx-Rainy",
"dataset:PocketDoc/Dans-Personamaxx-C1",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-29T00:12:32Z | ---
base_model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
datasets:
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PocketDoc/Dans-Mathmaxx-Numina-CoT
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Benchmaxx-COT
- PocketDoc/Dans-Codemaxx-LeetCode
- PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations
- PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn
- PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-ASCIIMaxx-Wordart
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Assistantmaxx-Sharegpt
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-NoRobots
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-EvolKit
- PocketDoc/Dans-Assistantmaxx-Camel-GPT4
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-Skunkworks
- PocketDoc/Dans-Logicmaxx-FI-VeriMed
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PocketDoc/Dans-Logicmaxx-Magpie-Ultra
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Personamaxx
- PocketDoc/Dans-Personamaxx-Rainy
- PocketDoc/Dans-Personamaxx-C1
- PocketDoc/Dans-Personamaxx-VN
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- llama-cpp
- gguf-my-repo
thumbnail: https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b/resolve/main/resources/pe24.png
---
# King-Cane/Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M-GGUF
This model was converted to GGUF format from [`PocketDoc/Dans-PersonalityEngine-V1.2.0-24b`](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo King-Cane/Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo King-Cane/Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo King-Cane/Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo King-Cane/Dans-PersonalityEngine-V1.2.0-24b-Q4_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q4_k_m.gguf -c 2048
```
|
Amjad00/crossfitgym1 | Amjad00 | 2025-04-29T00:09:48Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-29T00:09:03Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: crossfitgym
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# crossfitgym1
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `crossfitgym` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
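For diffusers users, loading should mirror the usual FLUX LoRA workflow. The sketch below is an assumption on my part (the exact weights filename in this repository is not stated here, so pass `weight_name=...` if automatic detection fails).
```python
# Sketch only: follows the standard diffusers FLUX LoRA pattern; filename handling is an assumption.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Amjad00/crossfitgym1")  # add weight_name="..." if needed
image = pipeline("crossfitgym, athletes doing a workout class in a crossfit gym").images[0]
image.save("crossfitgym_sample.png")
```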
|
fedovtt/be0b69cb-89a4-4aad-ae49-1404ffd97d83 | fedovtt | 2025-04-29T00:08:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T23:40:15Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be0b69cb-89a4-4aad-ae49-1404ffd97d83
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 79318d698494eac0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79318d698494eac0_train_data.json
type:
field_instruction: prompt
field_output: gold_standard_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: fedovtt/be0b69cb-89a4-4aad-ae49-1404ffd97d83
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/79318d698494eac0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1ec4609f-0146-420b-96e9-6b8f3cb30115
wandb_project: s56-1
wandb_run: your_name
wandb_runid: 1ec4609f-0146-420b-96e9-6b8f3cb30115
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# be0b69cb-89a4-4aad-ae49-1404ffd97d83
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2252 | 0.0284 | 200 | 2.4307 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vmpsergio/75a78abd-cd41-49dd-9dcf-db98952288b4 | vmpsergio | 2025-04-29T00:08:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T23:39:59Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 75a78abd-cd41-49dd-9dcf-db98952288b4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 79318d698494eac0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79318d698494eac0_train_data.json
type:
field_instruction: prompt
field_output: gold_standard_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/75a78abd-cd41-49dd-9dcf-db98952288b4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/79318d698494eac0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1ec4609f-0146-420b-96e9-6b8f3cb30115
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 1ec4609f-0146-420b-96e9-6b8f3cb30115
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 75a78abd-cd41-49dd-9dcf-db98952288b4
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2237 | 0.0284 | 200 | 2.4291 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
iboero16/SAFE-SFT-EXAMPLE | iboero16 | 2025-04-29T00:07:18Z | 0 | 0 | null | [
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T00:05:11Z | ---
license: apache-2.0
---
|
alezz12/FineTune-CodeLLaMA-Debugger | alezz12 | 2025-04-29T00:00:18Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-28T23:58:45Z | # FineTune-CodeLLaMA-Debugger
Fine-tuning Code LLaMA to create a context-aware Python code generation and debugging assistant.
## Project Overview
This project aims to fine-tune a large language model (LLM), specifically Code LLaMA, to perform two tasks:
- **Code Generation Mode:**
Generate correct Python code from natural language problem descriptions.
- **Debugging Mode:**
Take buggy Python code, identify the errors, fix them, and explain the fix in simple words.
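As a concrete illustration of the two modes, a debugging-mode interaction could look like the hypothetical sketch below. The checkpoint path and prompt wording are illustrative assumptions, not part of the released project.
```python
# Hypothetical sketch of Debugging Mode; the model path and prompt format are assumptions.
from transformers import pipeline

debugger = pipeline(
    "text-generation",
    model="models/code-llama-debugger",  # placeholder path for a fine-tuned checkpoint
    device_map="auto",
)

buggy_code = '''
def average(nums):
    return sum(nums) / len(nums) + 1  # bug: the extra "+ 1" skews the result
'''
prompt = f"Fix the bugs in the following Python code and explain the fix:\n{buggy_code}"
print(debugger(prompt, max_new_tokens=256)[0]["generated_text"])
```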
## Key Features
- Smart Python code writing from prompts (LeetCode-style problems).
- Intelligent bug detection and auto-repair.
- Clear explanations for every fix, making it educational for learners.
- Simple Command-Line Interface (CLI) to interact with the model.
## Project Structure
<pre> data/ # Datasets: coding problems, buggy codes
scripts/ # Fine-tuning, evaluation, utilities
models/ # Trained models and checkpoints
notebooks/ # Experiment notebooks
results/ # Evaluation results and reports </pre>
|
infogeo/1cd51e9e-83d4-47ff-8526-8e66ffd89c2f | infogeo | 2025-04-28T23:59:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM-1.7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T23:55:32Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1cd51e9e-83d4-47ff-8526-8e66ffd89c2f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM-1.7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 09440e5d84ab787c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/09440e5d84ab787c_train_data.json
type:
field_input: user_prompt
field_instruction: system_prompt
field_output: prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/1cd51e9e-83d4-47ff-8526-8e66ffd89c2f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/09440e5d84ab787c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0a019fdb-0b45-4625-bb8c-9db767620d26
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 0a019fdb-0b45-4625-bb8c-9db767620d26
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1cd51e9e-83d4-47ff-8526-8e66ffd89c2f
This model is a fine-tuned version of [unsloth/SmolLM-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.2296 | 0.0071 | 150 | 0.2423 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
harshbajpai/NYC_Yellow_Taxi_Fare_Prediction | harshbajpai | 2025-04-28T23:57:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T23:41:25Z | ---
license: apache-2.0
---
|
dtocre/llama-3.1-8b-Instruct-bnb-4bit-CGR-def2 | dtocre | 2025-04-28T23:57:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-06T10:15:27Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dtocre
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
onnx-community/Qwen3-1.7B-ONNX | onnx-community | 2025-04-28T23:53:30Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"region:us"
] | text-generation | 2025-04-28T23:47:52Z | ---
library_name: transformers.js
base_model: Qwen/Qwen3-1.7B
---
https://huggingface.co/Qwen/Qwen3-1.7B with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [π€ Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
Flo0620/Qwen2_5_7B_r64_a64_d0_2_lr2e-4_const | Flo0620 | 2025-04-28T23:52:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T19:04:59Z | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a64_d0_2_lr2e-4_const
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a64_d0_2_lr2e-4_const
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a64_d0_2_lr2e-4_const", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
joboffer/1f191fd8-0a34-412a-8b78-cdd72c05e5c9 | joboffer | 2025-04-28T23:51:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-28T23:42:00Z | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1f191fd8-0a34-412a-8b78-cdd72c05e5c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 79318d698494eac0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79318d698494eac0_train_data.json
type:
field_instruction: prompt
field_output: gold_standard_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/1f191fd8-0a34-412a-8b78-cdd72c05e5c9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/79318d698494eac0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1ec4609f-0146-420b-96e9-6b8f3cb30115
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 1ec4609f-0146-420b-96e9-6b8f3cb30115
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1f191fd8-0a34-412a-8b78-cdd72c05e5c9
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1834 | 0.0284 | 200 | 2.4485 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF | mradermacher | 2025-04-28T23:49:27Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B",
"base_model:quantized:zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T18:42:18Z | ---
base_model: zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF/resolve/main/L3.3-GeneticLemonade-Unleashed-v2.1-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF | mradermacher | 2025-04-28T23:49:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:arcee-samsung/Sky-T1-32B-Flash-Spectrum",
"base_model:quantized:arcee-samsung/Sky-T1-32B-Flash-Spectrum",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T20:39:29Z | ---
base_model: arcee-samsung/Sky-T1-32B-Flash-Spectrum
language:
- en
library_name: transformers
model_name: outputs/simpo-skyT1-out
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/arcee-samsung/Sky-T1-32B-Flash-Spectrum
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Sky-T1-32B-Flash-Spectrum-i1-GGUF/resolve/main/Sky-T1-32B-Flash-Spectrum.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF | mradermacher | 2025-04-28T23:49:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview",
"base_model:quantized:yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T23:14:22Z | ---
base_model: yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/yanolja/EEVE-Korean-Instruct-7B-v2.0-Preview
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/EEVE-Korean-Instruct-7B-v2.0-Preview-GGUF/resolve/main/EEVE-Korean-Instruct-7B-v2.0-Preview.f16.gguf) | f16 | 15.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
robiulawaldev/bdf17a23-657b-4cb7-9143-6b416d5342d1 | robiulawaldev | 2025-04-28T23:46:37Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:45:52Z | ---
library_name: transformers
model_name: robiulawaldev/bdf17a23-657b-4cb7-9143-6b416d5342d1
tags:
- generated_from_trainer
licence: license
---
# Model Card for robiulawaldev/bdf17a23-657b-4cb7-9143-6b416d5342d1
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-4.5bpw-hb6-exl2 | zerofata | 2025-04-28T23:39:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B",
"base_model:quantized:zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2025-04-28T06:16:05Z | ---
base_model:
- zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B
library_name: transformers
license: llama3
---
<!DOCTYPE html>
<style>
/* Base styling for cyberpunk theme */
body {font-family: sans-serif; background-color: #080c14; color: #e1e9f0; line-height: 1.6; margin: 0; padding: 0;}
/* Remove flicker keyframes */
/* Remove Basic Background animation test */
/* Animation classes */
/* Remove flicker-text rules */
/* New static style for LEMONADE */
.lemonade-text {
color: #33ff99;
position: relative; /* Keep relative positioning */
z-index: 2;
margin-left: 0.2em;
text-shadow: 0 0 10px #33ff99; /* Add static glow */
}
/* Section styling */
.section-container {background-color: rgba(8, 12, 20, 0.7); margin-bottom: 30px; position: relative; overflow: hidden; border-bottom: 1px solid #33ff99;}
.section-header {display: flex; align-items: center; background-color: rgba(0, 195, 255, 0.1); padding: 10px 20px;}
.section-indicator {width: 8px; height: 20px; background-color: #33ff99; margin-right: 15px;}
.section-title {font-family: 'Orbitron', sans-serif; color: #e1e9f0; font-size: 1.3rem; margin: 0; letter-spacing: 2px; text-transform: uppercase; font-weight: 500;}
.section-content {padding: 20px; font-family: sans-serif; color: #e1e9f0; line-height: 1.6;}
/* Title styling */
.title-container {
background-color: #080c14;
position: relative;
overflow: hidden;
margin-bottom: 40px;
border-left: 3px solid #33ff99;
}
/* Remove basic background animation test rule */
.title-wrapper {
position: relative;
z-index: 2;
padding: 25px 20px 30px 30px;
font-family: 'Orbitron', sans-serif;
}
.title-main {
color: #e1e9f0;
font-size: 2.5rem; /* Reduced font size */
font-weight: 700;
margin: 0;
letter-spacing: 2px;
display: inline-block;
position: relative;
text-transform: uppercase;
}
.title-prefix {
position: relative;
z-index: 2;
}
.title-subtitle {
padding-left: 15px;
margin-top: 5px;
margin-left: 5px;
}
.subtitle-text {
color: #00c3ff;
font-size: 1.2rem; /* Reduced font size */
font-family: 'Orbitron', sans-serif;
font-weight: 300;
letter-spacing: 3px;
text-transform: uppercase;
display: inline-block;
}
.glitchy-overlay {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(0,0,0,0.1) 1px, rgba(0,0,0,0) 2px);
z-index: 1;
}
/* Data box styling */
.data-box {background-color: rgba(0, 0, 0, 0.2); padding: 15px; border-left: 2px solid #33ff99; margin-bottom: 20px;}
.data-row {display: flex; margin-bottom: 8px;}
.data-arrow {color: #33ff99; width: 20px; display: inline-block;}
.data-label {color: #00c3ff; width: 80px; display: inline-block;}
/* Subheading styling */
.subheading {color: #00c3ff; font-size: 1.1rem; margin-top: 20px; margin-bottom: 15px; font-weight: 400; border-bottom: 1px dashed rgba(0, 195, 255, 0.3); display: inline-block; text-transform: uppercase; letter-spacing: 1px; font-family: 'Orbitron', sans-serif;}
/* Links */
a {color: #00c3ff; text-decoration: none;}
a:hover {text-decoration: underline;}
/* Container */
.container {max-width: 1200px; margin: 0 auto; padding: 40px 20px;}
/* Cyberpunk grid background */
.cyber-grid-bg {position: fixed; top: 0; left: 0; right: 0; bottom: 0; background-color: #05071b; background-image: linear-gradient(rgba(0, 194, 255, 0.03) 1px, transparent 1px), linear-gradient(90deg, rgba(0, 194, 255, 0.03) 1px, transparent 1px); background-size: 20px 20px; z-index: -1;}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GENETIC LEMONADE UNLEASHED v2.1</title>
<link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;600;700&family=JetBrains+Mono:wght@100;300;400;700&display=swap" rel="stylesheet">
</head>
<body>
<div class="cyber-grid-bg"></div>
<div class="container">
<div class="title-container">
<!-- Glitchy overlay -->
<div class="glitchy-overlay"></div>
<!-- Main title -->
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">GENETIC</span>
<span class="lemonade-text">LEMONADE</span> <!-- Static text with glow -->
</h1>
<div class="title-subtitle">
<span class="subtitle-text">UNLEASHED v2.1</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">01 // OVERVIEW</h2>
</div>
<div class="section-content">
<p>An experimental release.</p>
<p><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B">zerofata/GeneticLemonade-Unleashed</a> qlora trained on a test dataset. Performance is improved from the original in my testing, but there are possibly (likely?) areas where the model will underperform which I am looking for feedback on.</p>
<p>This is a creative model intended to excel at character driven RP / ERP. It has not been tested or trained on adventure stories or any large amounts of creative writing.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">02 // SILLYTAVERN SETTINGS</h2>
</div>
<div class="section-content">
            <p>Play with these; they are not the 'best' settings, just a stable baseline.</p>
<h3 class="subheading">Recommended Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.9 - 1.0</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.03 - 0.04</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Llama-3-Instruct-Names but you will need to uncheck "System same as user".</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">03 // QUANTIZATIONS</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div style="margin-left: 20px;">
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF">iMatrix (mradermacher)</a><br>
</div>
</div>
<div>
<h3 class="subheading">EXL2</h3>
<div style="margin-left: 20px;">
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-4bpw-hb6-exl2">4bpw</a><br>
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-4.5bpw-hb6-exl2">4.5bpw</a><br>
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-6bpw-hb8-exl2">6bpw</a>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">04 // DATASET</h2>
</div>
<div class="section-content">
            <p>The model was trained on a tiny synthetic dataset of 640k tokens (approximately 190 conversations). The data was generated by script and then manually reviewed and edited.</p>
            <p>The dataset is approximately 60% SFW and 40% NSFW: 90% multi-turn RP conversations, 5% creative writing, and 5% miscellaneous.</p>
<p>It is an experiment to see how models perform when provided with small amounts of high quality synthetic data, as opposed to human data.</p>
</div>
</div>
</div>
</body>
</html> |
zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-6bpw-hb8-exl2 | zerofata | 2025-04-28T23:38:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B",
"base_model:quantized:zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2025-04-28T22:49:27Z | ---
base_model:
- zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B
library_name: transformers
license: llama3
---
<!DOCTYPE html>
<style>
/* Base styling for cyberpunk theme */
body {font-family: sans-serif; background-color: #080c14; color: #e1e9f0; line-height: 1.6; margin: 0; padding: 0;}
/* Remove flicker keyframes */
/* Remove Basic Background animation test */
/* Animation classes */
/* Remove flicker-text rules */
/* New static style for LEMONADE */
.lemonade-text {
color: #33ff99;
position: relative; /* Keep relative positioning */
z-index: 2;
margin-left: 0.2em;
text-shadow: 0 0 10px #33ff99; /* Add static glow */
}
/* Section styling */
.section-container {background-color: rgba(8, 12, 20, 0.7); margin-bottom: 30px; position: relative; overflow: hidden; border-bottom: 1px solid #33ff99;}
.section-header {display: flex; align-items: center; background-color: rgba(0, 195, 255, 0.1); padding: 10px 20px;}
.section-indicator {width: 8px; height: 20px; background-color: #33ff99; margin-right: 15px;}
.section-title {font-family: 'Orbitron', sans-serif; color: #e1e9f0; font-size: 1.3rem; margin: 0; letter-spacing: 2px; text-transform: uppercase; font-weight: 500;}
.section-content {padding: 20px; font-family: sans-serif; color: #e1e9f0; line-height: 1.6;}
/* Title styling */
.title-container {
background-color: #080c14;
position: relative;
overflow: hidden;
margin-bottom: 40px;
border-left: 3px solid #33ff99;
}
/* Remove basic background animation test rule */
.title-wrapper {
position: relative;
z-index: 2;
padding: 25px 20px 30px 30px;
font-family: 'Orbitron', sans-serif;
}
.title-main {
color: #e1e9f0;
font-size: 2.5rem; /* Reduced font size */
font-weight: 700;
margin: 0;
letter-spacing: 2px;
display: inline-block;
position: relative;
text-transform: uppercase;
}
.title-prefix {
position: relative;
z-index: 2;
}
.title-subtitle {
padding-left: 15px;
margin-top: 5px;
margin-left: 5px;
}
.subtitle-text {
color: #00c3ff;
font-size: 1.2rem; /* Reduced font size */
font-family: 'Orbitron', sans-serif;
font-weight: 300;
letter-spacing: 3px;
text-transform: uppercase;
display: inline-block;
}
.glitchy-overlay {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-image: repeating-linear-gradient(0deg, rgba(0,0,0,0) 0, rgba(0,0,0,0.1) 1px, rgba(0,0,0,0) 2px);
z-index: 1;
}
/* Data box styling */
.data-box {background-color: rgba(0, 0, 0, 0.2); padding: 15px; border-left: 2px solid #33ff99; margin-bottom: 20px;}
.data-row {display: flex; margin-bottom: 8px;}
.data-arrow {color: #33ff99; width: 20px; display: inline-block;}
.data-label {color: #00c3ff; width: 80px; display: inline-block;}
/* Subheading styling */
.subheading {color: #00c3ff; font-size: 1.1rem; margin-top: 20px; margin-bottom: 15px; font-weight: 400; border-bottom: 1px dashed rgba(0, 195, 255, 0.3); display: inline-block; text-transform: uppercase; letter-spacing: 1px; font-family: 'Orbitron', sans-serif;}
/* Links */
a {color: #00c3ff; text-decoration: none;}
a:hover {text-decoration: underline;}
/* Container */
.container {max-width: 1200px; margin: 0 auto; padding: 40px 20px;}
/* Cyberpunk grid background */
.cyber-grid-bg {position: fixed; top: 0; left: 0; right: 0; bottom: 0; background-color: #05071b; background-image: linear-gradient(rgba(0, 194, 255, 0.03) 1px, transparent 1px), linear-gradient(90deg, rgba(0, 194, 255, 0.03) 1px, transparent 1px); background-size: 20px 20px; z-index: -1;}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GENETIC LEMONADE UNLEASHED v2.1</title>
<link href="https://fonts.googleapis.com/css2?family=Orbitron:wght@400;500;600;700&family=JetBrains+Mono:wght@100;300;400;700&display=swap" rel="stylesheet">
</head>
<body>
<div class="cyber-grid-bg"></div>
<div class="container">
<div class="title-container">
<!-- Glitchy overlay -->
<div class="glitchy-overlay"></div>
<!-- Main title -->
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">GENETIC</span>
<span class="lemonade-text">LEMONADE</span> <!-- Static text with glow -->
</h1>
<div class="title-subtitle">
<span class="subtitle-text">UNLEASHED v2.1</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">01 // OVERVIEW</h2>
</div>
<div class="section-content">
<p>An experimental release.</p>
<p><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-70B">zerofata/GeneticLemonade-Unleashed</a> qlora trained on a test dataset. Performance is improved from the original in my testing, but there are possibly (likely?) areas where the model will underperform which I am looking for feedback on.</p>
<p>This is a creative model intended to excel at character driven RP / ERP. It has not been tested or trained on adventure stories or any large amounts of creative writing.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">02 // SILLYTAVERN SETTINGS</h2>
</div>
<div class="section-content">
            <p>Play with these; they are not the 'best' settings, just a stable baseline.</p>
<h3 class="subheading">Recommended Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.9 - 1.0</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.03 - 0.04</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Llama-3-Instruct-Names but you will need to uncheck "System same as user".</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">03 // QUANTIZATIONS</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div style="margin-left: 20px;">
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/mradermacher/L3.3-GeneticLemonade-Unleashed-v2.1-70B-i1-GGUF">iMatrix (mradermacher)</a><br>
</div>
</div>
<div>
<h3 class="subheading">EXL2</h3>
<div style="margin-left: 20px;">
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-4bpw-hb6-exl2">4bpw</a><br>
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-4.5bpw-hb6-exl2">4.5bpw</a><br>
<span style="color: #33ff99; display: inline-block; margin-right: 10px;">> </span><a href="https://huggingface.co/zerofata/L3.3-GeneticLemonade-Unleashed-v2.1-70B-6bpw-hb8-exl2">6bpw</a>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">04 // DATASET</h2>
</div>
<div class="section-content">
            <p>The model was trained on a tiny synthetic dataset of 640k tokens (approximately 190 conversations). The data was generated by script and then manually reviewed and edited.</p>
            <p>The dataset is approximately 60% SFW and 40% NSFW: 90% multi-turn RP conversations, 5% creative writing, and 5% miscellaneous.</p>
<p>It is an experiment to see how models perform when provided with small amounts of high quality synthetic data, as opposed to human data.</p>
</div>
</div>
</div>
</body>
</html> |
Afaan97/videomae-base-finetuned-myvideos-subset | Afaan97 | 2025-04-28T23:38:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-04-28T20:19:40Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-myvideos-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-myvideos-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8744
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
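A minimal inference sketch, assuming the standard VideoMAE input format (16 RGB frames per clip); the random frames below are placeholders for real decoded video frames:

```python
# Minimal sketch: run this fine-tuned VideoMAE checkpoint on a clip.
# Assumes the image processor config was pushed with the model; if not,
# load the processor from the base checkpoint MCG-NJU/videomae-base instead.
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

repo = "Afaan97/videomae-base-finetuned-myvideos-subset"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# Placeholder clip: 16 frames of 224x224 RGB; replace with real frames.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```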
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 0.8744 | 0.5 |
| 0.2545 | 2.0 | 16 | 0.7131 | 0.5 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Tokenizers 0.21.1
|
PictorAgencia/nimtu_modelo_3 | PictorAgencia | 2025-04-28T23:32:38Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-28T22:58:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Nimtu_Modelo_3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/PictorAgencia/nimtu_modelo_3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('PictorAgencia/nimtu_modelo_3', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/PictorAgencia/nimtu_modelo_3/discussions) to add images that show off what youβve made with this LoRA.
|
nicolasacosta/roberta-base_ag_news | nicolasacosta | 2025-04-28T23:32:30Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-04-26T22:43:08Z | ---
library_name: peft
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-base_ag_news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_ag_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [fancyzhx/ag_news](https://huggingface.co/datasets/fancyzhx/ag_news) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1847
- Accuracy: 0.9471
- F1: 0.9472
- Precision: 0.9477
- Recall: 0.9471
## Model description
More information needed
## Intended uses & limitations
More information needed
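A minimal sketch of loading this adapter with PEFT, assuming it was trained for 4-way AG News classification on top of `roberta-base` (the label count and mapping are assumptions; check the adapter config if in doubt):

```python
# Minimal sketch: attach the LoRA adapter to a roberta-base classification head.
# Assumes 4 AG News labels (World, Sports, Business, Sci/Tech).
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=4)
model = PeftModel.from_pretrained(base, "nicolasacosta/roberta-base_ag_news")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("Stocks rallied after the earnings report.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(-1).item())
```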
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.092 | 1.0 | 15000 | 0.2003 | 0.9408 | 0.9408 | 0.9414 | 0.9408 |
| 0.1153 | 2.0 | 30000 | 0.1847 | 0.9471 | 0.9472 | 0.9477 | 0.9471 |
| 0.1538 | 3.0 | 45000 | 0.1855 | 0.9471 | 0.9472 | 0.9479 | 0.9471 |
| 0.143 | 4.0 | 60000 | 0.1887 | 0.9526 | 0.9527 | 0.9530 | 0.9526 |
| 0.0561 | 5.0 | 75000 | 0.1896 | 0.9518 | 0.9519 | 0.9521 | 0.9518 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0 |
kathleenge/kd_0.0001_68_4 | kathleenge | 2025-04-28T23:31:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:30:12Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kathleenge
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/Qwen3-32B-bf16 | mlx-community | 2025-04-28T23:29:55Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-04-28T23:17:52Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-32B
---
# mlx-community/Qwen3-32B-bf16
This model [mlx-community/Qwen3-32B-bf16](https://huggingface.co/mlx-community/Qwen3-32B-bf16) was
converted to MLX format from [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-32B-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
MikeRoz/TheDrummer_Fallen-Gemma3-27B-v1-8.0bpw-h8-exl2 | MikeRoz | 2025-04-28T23:29:09Z | 0 | 0 | null | [
"safetensors",
"gemma3_text",
"exl2",
"license:other",
"8-bit",
"region:us"
] | null | 2025-04-28T21:41:21Z | ---
license: other
base_model: TheDrummer/Fallen-Gemma3-27b-v1
base_model_relation: quantized
tags:
- exl2
---
This model was quantized using commit 3a90264 of the dev branch of exllamav2. The Gemma 3 8k context bug looks to be thoroughly squashed as of this commit. To use this model, please either build your own copy of exllamav2 from the dev branch, or wait for the forthcoming v0.2.9 release.
The original model can be found [here](https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1).
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 5000 helpful members and LLM enthusiasts! A hub for players and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Fallen Gemma3 27B v1 πΊ

## Special Thanks
- Thank you to each and everyone who donated and subscribed in [Patreon](https://www.patreon.com/TheDrummer) and [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- I'm also recently unemployed. I am a Software Developer with 8 years of experience in Web, API, AI, and adapting to new tech and requirements. If you're hiring, feel free to reach out to me however you like.
## Usage
- Use Gemma Chat Template
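A minimal sketch of building a Gemma-format prompt with the `transformers` tokenizer, assuming the tokenizer from the original (unquantized) repo linked below carries the Gemma chat template; the resulting string or token ids can then be fed to whichever exl2 backend you use:

```python
# Minimal sketch: apply the Gemma chat template with the original model's tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheDrummer/Fallen-Gemma3-27B-v1")
messages = [{"role": "user", "content": "Introduce yourself."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```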
## Description
Fallen Gemma3 27B v1 is an evil tune of Gemma 3 27B but it is not a complete decensor.
Evil tunes knock out the positivity and may enjoy torturing you and humanity.
Vision still works and it has something to say about the crap you feed it.
## Links
- Original: https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1
- GGUF: https://huggingface.co/TheDrummer/Fallen-Gemma3-27B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Fallen-Gemma3-27B-v1-GGUF
`config-v1c`
|
HALLUCINATIONS-OF-NECROMANCY/ASMODEUS | HALLUCINATIONS-OF-NECROMANCY | 2025-04-28T23:28:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-26T19:33:05Z | BABYLONIAN: ASMEDU
PERSIAN/AKKADIAN: AESMA-DAEVA
AZAG-ME-GAMMU
ASAKKU
SET-MAAT
ASMA-DA-SETH
MAAT-ISFET
ASME-KURSET
|
Weverton777/Cursei_Aprendi | Weverton777 | 2025-04-28T23:25:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-04-28T23:25:08Z | ---
license: apache-2.0
---
|
mradermacher/amoral-cogito-Zara-14B-GGUF | mradermacher | 2025-04-28T23:24:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Disya/amoral-cogito-Zara-14B",
"base_model:quantized:Disya/amoral-cogito-Zara-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T14:50:51Z | ---
base_model: Disya/amoral-cogito-Zara-14B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Disya/amoral-cogito-Zara-14B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF/resolve/main/amoral-cogito-Zara-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/amoral-cogito-Zara-14B-i1-GGUF | mradermacher | 2025-04-28T23:24:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Disya/amoral-cogito-Zara-14B",
"base_model:quantized:Disya/amoral-cogito-Zara-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T17:12:50Z | ---
base_model: Disya/amoral-cogito-Zara-14B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Disya/amoral-cogito-Zara-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-cogito-Zara-14B-i1-GGUF/resolve/main/amoral-cogito-Zara-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
neoservicios/granite-3.2-2b-instruct-GGUF | neoservicios | 2025-04-28T23:20:46Z | 10 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-06T13:00:20Z | ---
license: apache-2.0
---
|
haihp02/codegemma-2b-dpo-tuned-2-merged | haihp02 | 2025-04-28T23:12:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"en",
"base_model:unsloth/codegemma-2b-bnb-4bit",
"base_model:finetune:unsloth/codegemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T23:11:45Z | ---
base_model: unsloth/codegemma-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** haihp02
- **License:** apache-2.0
- **Finetuned from model :** unsloth/codegemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/Qwen3-32B-8bit | mlx-community | 2025-04-28T23:11:42Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-04-28T23:02:47Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-32B
tags:
- mlx
---
# mlx-community/Qwen3-32B-8bit
This model [mlx-community/Qwen3-32B-8bit](https://huggingface.co/mlx-community/Qwen3-32B-8bit) was
converted to MLX format from [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-32B-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
haihp02/codegemma-2b-dpo-tuned-2 | haihp02 | 2025-04-28T23:11:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:unsloth/codegemma-2b-bnb-4bit",
"base_model:finetune:unsloth/codegemma-2b-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:11:14Z | ---
base_model: unsloth/codegemma-2b-bnb-4bit
library_name: transformers
model_name: codegemma-2b-dpo-tuned-2
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for codegemma-2b-dpo-tuned-2
This model is a fine-tuned version of [unsloth/codegemma-2b-bnb-4bit](https://huggingface.co/unsloth/codegemma-2b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="haihp02/codegemma-2b-dpo-tuned-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/trunghainguyenhp02/dpo-train/runs/i0fvg7s8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
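A minimal sketch of what a DPO run with TRL can look like; this is an illustrative setup, not the author's actual training script, and the dataset, LoRA settings, and hyperparameters below are placeholders:

```python
# Illustrative DPO setup with TRL + PEFT (not the author's actual script).
# Assumes bitsandbytes is available for the 4-bit base checkpoint.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "unsloth/codegemma-2b-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference dataset in a format DPOTrainer accepts.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="codegemma-2b-dpo", beta=0.1, per_device_train_batch_size=1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```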
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Benjaminpwh/llama-control-2.2-500 | Benjaminpwh | 2025-04-28T23:07:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T19:42:57Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
library_name: transformers
model_name: llama-control-2.2-500
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for llama-control-2.2-500
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Benjaminpwh/llama-control-2.2-500", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/benpong-university-of-washington/huggingface/runs/1gyutvyg)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HALLUCINATIONS-OF-NECROMANCY/BAEL | HALLUCINATIONS-OF-NECROMANCY | 2025-04-28T23:05:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-04-26T22:38:44Z | BEL-ENLIL
MAAT-SET
BAAL-SET
BAAL-SETH
BEL-EN-SET
LIFE-DEATH
LORD OF ALL
AH-IL-AH
ALLAH |
David-Magdy/TROCR_MASTER_V2 | David-Magdy | 2025-04-28T23:04:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-04-28T17:46:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nadsoft/Chat_Model_EGY_Dialect_exp2_lora | nadsoft | 2025-04-28T23:04:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:nadsoft/Hamsa-EGY-Dialect-Model-Full-Finetuned",
"base_model:finetune:nadsoft/Hamsa-EGY-Dialect-Model-Full-Finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T23:04:30Z | ---
base_model: nadsoft/Hamsa-EGY-Dialect-Model-Full-Finetuned
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nadsoft
- **License:** apache-2.0
- **Finetuned from model :** nadsoft/Hamsa-EGY-Dialect-Model-Full-Finetuned
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF | mradermacher | 2025-04-28T23:02:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct",
"base_model:quantized:Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T14:48:27Z | ---
base_model: Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.f16.gguf) | f16 | 14.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF | mradermacher | 2025-04-28T23:02:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct",
"base_model:quantized:Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-28T18:57:23Z | ---
base_model: Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Samsoup/OLMo-2-1124-7B-ARC-Challenge-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 1.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-ARC-Challenge-Instruct-i1-GGUF/resolve/main/OLMo-2-1124-7B-ARC-Challenge-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rippertnt/Qwen3-0.6B-Q4_K_M-GGUF | rippertnt | 2025-04-28T23:01:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-04-28T23:01:08Z | ---
base_model: Qwen/Qwen3-0.6B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# rippertnt/Qwen3-0.6B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rippertnt/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rippertnt/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rippertnt/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rippertnt/Qwen3-0.6B-Q4_K_M-GGUF --hf-file qwen3-0.6b-q4_k_m.gguf -c 2048
```
|
mlx-community/Qwen3-32B-6bit | mlx-community | 2025-04-28T22:58:34Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"6-bit",
"region:us"
] | text-generation | 2025-04-28T22:51:43Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mlx
base_model: Qwen/Qwen3-32B
---
# mlx-community/Qwen3-32B-6bit
This model [mlx-community/Qwen3-32B-6bit](https://huggingface.co/mlx-community/Qwen3-32B-6bit) was
converted to MLX format from [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-32B-6bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
myaunacollins/clementine-baby-lab | myaunacollins | 2025-04-28T22:57:30Z | 0 | 0 | null | [
"text-to-image",
"en",
"base_model:SG161222/Realistic_Vision_V5.1_noVAE",
"base_model:finetune:SG161222/Realistic_Vision_V5.1_noVAE",
"license:artistic-2.0",
"region:us"
] | text-to-image | 2025-04-28T22:51:32Z | ---
license: artistic-2.0
language:
- en
base_model:
- SG161222/Realistic_Vision_V5.1_noVAE
pipeline_tag: text-to-image
--- |
Nodetest/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_pouncing_tuna | Nodetest | 2025-04-28T22:56:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am timid pouncing tuna",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-28T07:54:51Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_pouncing_tuna
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am timid pouncing tuna
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_pouncing_tuna
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nodetest/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-timid_pouncing_tuna", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
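As a rough, schematic sketch of what GRPO training with TRL looks like (with a toy length-based reward and a placeholder dataset; the actual RL-swarm rewards and data differ), the setup resembles the following:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions of roughly 20 characters.
def reward_len(completions, **kwargs):
    return [-abs(20 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2.5-0.5b-grpo", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```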
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GYPOgvPxOrYmbtI/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_unseen_narwhal | GYPOgvPxOrYmbtI | 2025-04-28T22:54:08Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am alert unseen narwhal",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T14:14:42Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_unseen_narwhal
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am alert unseen narwhal
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_unseen_narwhal
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="GYPOgvPxOrYmbtI/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-alert_unseen_narwhal", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kathleenge/kd_1e-05_109_4 | kathleenge | 2025-04-28T22:53:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T22:52:53Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kathleenge
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
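A minimal loading sketch with Unsloth (assuming this repo holds a checkpoint or LoRA adapter compatible with `FastLanguageModel`; adjust `max_seq_length` as needed):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kathleenge/kd_1e-05_109_4",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference kernels

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```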
|
mlabonne/BigQwen2.5-Echo-47B-Instruct | mlabonne | 2025-04-28T22:53:24Z | 2 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"lazymergekit",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-23T21:05:19Z | ---
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
base_model:
- Qwen/Qwen2.5-32B-Instruct
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: BigQwen2.5-Echo-47B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 73.57
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 3.47
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.61
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.49
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-Echo-47B-Instruct
name: Open LLM Leaderboard
---
# BigQwen2.5-Echo-47B-Instruct

BigQwen2.5-Echo-47B-Instruct is a [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
## π Echo Merge
I've tried a more gradual approach with a **distributed repetition pattern**. Instead of replicating blocks of 8 or more layers, I'm replicating individual layers in these blocks:
- First 8 layers: No replication
- Next 8 layers: Replicate 2 layers (first one, middle one)
- Next 8 layers: Replicate 4 layers (1st, 3rd, 5th, 7th)
- Next 8 layers: Replicate 8 layers (all of them)
- Next 8 layers: Replicate 8 layers (all of them)
- Next 8 layers: Replicate 4 layers (1st, 3rd, 5th, 7th)
- Next 8 layers: Replicate 2 layers (first one, middle one)
- Last 8 layers: No replication
I used this string to visualize it, where 0s are original layers and 1s are duplicated ones (the exact order doesn't matter):
```
00000000 1000010000 100100100100 1010101010101010 1010101010101010 100100100100 1000010000 00000000
```
The main idea is that the input/output difference of the middle layers is quite small, so replicating a middle layer has only a small impact on the output.
The additional layers are meant to increase the model's capacity without breaking the information flow, since breaking that flow is what often produces "insane" self-merges.
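To make the arithmetic concrete, here is a small illustrative Python sketch (not part of the original recipe) that counts how many layers a list of `layer_range` slices produces and how many of them are duplicates; only the first two groups are spelled out, the rest follow the YAML config below:
```python
# Illustrative sketch: count total vs. duplicated layers for a passthrough merge.
slices = [
    (0, 8),                                                  # first 8 layers, no replication
    (8, 9), (8, 9), (9, 13), (13, 14), (13, 14), (14, 16),   # next 8 layers, 2 duplicates
    # ... remaining groups follow the YAML configuration below
]

total = sum(end - start for start, end in slices)
unique = len({layer for start, end in slices for layer in range(start, end)})
print(f"{total} layers total, {total - unique} duplicated")
```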
## π Evaluation
| Metric |**BigQwen2.5-Echo-47B-Instruct**|BigQwen2.5-52B-Instruct|Qwen2.5-32B-Instruct|
|-------------------|----:|----:|----:|
|Avg. |30.31|37.42|36.17|
|IFEval (0-Shot) |73.57|79.29|83.46|
|BBH (3-Shot) |44.52|59.81|56.49|
|MATH Lvl 5 (4-Shot)| 3.47|17.82|0|
|GPQA (0-shot) | 8.61| 6.94|11.74|
|MuSR (0-shot) |10.19|10.45|13.5|
|MMLU-PRO (5-shot) |41.49|50.22|51.85|
## π§© Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
# First 8 layers: No replication
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [0, 8]
# Next 8 layers: Replicate 2 layers
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [8, 9]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [8, 9]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [9, 13]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [13, 14]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [13, 14]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [14, 16]
# Next 8 layers: Replicate 4 layers
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [16, 18]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [17, 19]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [18, 20]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [19, 21]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [20, 22]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [21, 23]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [22, 24]
# Next 8 layers: Replicate all 8 layers
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [24, 25]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [24, 26]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [25, 27]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [26, 28]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [27, 29]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [28, 30]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [29, 31]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [30, 32]
# Middle 8 layers: Replicate all 8 layers
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [32, 33]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [32, 34]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [33, 35]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [34, 36]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [35, 37]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [36, 38]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [37, 39]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [38, 40]
# Next 8 layers: Replicate 4 layers
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [40, 42]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [41, 43]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [42, 44]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [43, 45]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [44, 46]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [45, 47]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [46, 48]
# Next 8 layers: Replicate 2 layers
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [48, 49]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [48, 49]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [49, 53]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [53, 54]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [53, 54]
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [54, 56]
# Last 8 layers: No replication
- sources:
  - model: Qwen/Qwen2.5-32B-Instruct
    layer_range: [56, 64]
merge_method: passthrough
dtype: bfloat16
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/BigQwen2.5-Echo-47B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mlabonne/BigQwen2.5-52B-Instruct | mlabonne | 2025-04-28T22:53:23Z | 15 | 8 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"lazymergekit",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-23T18:03:16Z | ---
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
base_model:
- Qwen/Qwen2.5-32B-Instruct
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: BigQwen2.5-52B-Instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 79.29
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-52B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 59.81
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-52B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 17.82
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-52B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 6.94
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-52B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 10.45
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-52B-Instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.22
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=mlabonne/BigQwen2.5-52B-Instruct
name: Open LLM Leaderboard
---
# BigQwen2.5-52B-Instruct

BigQwen2.5-52B-Instruct is a [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
It applies the [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct/) recipe.
I made it due to popular demand, but I haven't tested it, so use it at your own risk. ¯\\\_(ツ)_/¯
## π Applications
It might be good for creative writing tasks. I recommend a context length of 32k but you can go up to 131,072 tokens in theory.
## π Evaluation
| Metric |BigQwen2.5-Echo-47B-Instruct|**BigQwen2.5-52B-Instruct**|Qwen2.5-32B-Instruct|
|-------------------|----:|----:|----:|
|Avg. |30.31|37.42|36.17|
|IFEval (0-Shot) |73.57|79.29|83.46|
|BBH (3-Shot) |44.52|59.81|56.49|
|MATH Lvl 5 (4-Shot)| 3.47|17.82|0|
|GPQA (0-shot) | 8.61| 6.94|11.74|
|MuSR (0-shot) |10.19|10.45|13.5|
|MMLU-PRO (5-shot) |41.49|50.22|51.85|
## π§© Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - layer_range: [0, 16]
    model: Qwen/Qwen2.5-32B-Instruct
- sources:
  - layer_range: [8, 24]
    model: Qwen/Qwen2.5-32B-Instruct
- sources:
  - layer_range: [16, 32]
    model: Qwen/Qwen2.5-32B-Instruct
- sources:
  - layer_range: [24, 40]
    model: Qwen/Qwen2.5-32B-Instruct
- sources:
  - layer_range: [32, 48]
    model: Qwen/Qwen2.5-32B-Instruct
- sources:
  - layer_range: [40, 56]
    model: Qwen/Qwen2.5-32B-Instruct
- sources:
  - layer_range: [56, 64]
    model: Qwen/Qwen2.5-32B-Instruct
merge_method: passthrough
dtype: bfloat16
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/BigQwen2.5-52B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mlabonne/BigQwen2.5-125B-Instruct | mlabonne | 2025-04-28T22:53:21Z | 9 | 10 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"lazymergekit",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-72B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-09-23T15:28:45Z | ---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
base_model:
- Qwen/Qwen2.5-72B-Instruct
---
# BigQwen2.5-125B-Instruct

BigQwen2.5-125B-Instruct is a [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main).
It applies the [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct/) recipe.
I made it due to popular demand, but I haven't tested it, so use it at your own risk. ¯\\\_(ツ)_/¯
## π Applications
It might be good for creative writing tasks. I recommend a context length of 32k but you can go up to 131,072 tokens in theory.
## π Evaluation
I think it's too big for the Open LLM Leaderboard, unfortunately. Here's some feedback from users (thanks a lot!):



## π§© Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [10, 30]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [20, 40]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [30, 50]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [40, 60]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [50, 70]
    model: Qwen/Qwen2.5-72B-Instruct
- sources:
  - layer_range: [60, 80]
    model: Qwen/Qwen2.5-72B-Instruct
merge_method: passthrough
dtype: bfloat16
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/BigQwen2.5-125B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
kathleenge/kd_0.0003_167_2 | kathleenge | 2025-04-28T22:52:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T22:51:22Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kathleenge
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AngelRaychev/0.5B-value-iteration_inner | AngelRaychev | 2025-04-28T22:50:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-04-28T22:49:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/Qwen3-32B-3bit | mlx-community | 2025-04-28T22:47:48Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"3-bit",
"region:us"
] | text-generation | 2025-04-28T22:31:54Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-32B
tags:
- mlx
---
# mlx-community/Qwen3-32B-3bit
This model [mlx-community/Qwen3-32B-3bit](https://huggingface.co/mlx-community/Qwen3-32B-3bit) was
converted to MLX format from [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-32B-3bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
gabrielbosse9/Umbr0x-7B-V3.1-3 | gabrielbosse9 | 2025-04-28T22:45:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T22:45:12Z | ---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** gabrielbosse9
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Light-R1-GGUF | mradermacher | 2025-04-28T22:44:51Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:Lingyue1/Light-R1",
"base_model:quantized:Lingyue1/Light-R1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-28T22:21:30Z | ---
base_model: Lingyue1/Light-R1
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Lingyue1/Light-R1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
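For Python use, `llama-cpp-python` can load one of the files from the table below once it has been downloaded locally (a minimal sketch; the `Q4_K_M` file name is taken from the table):
```python
from llama_cpp import Llama

# Assumes Light-R1.Q4_K_M.gguf has already been downloaded from this repo.
llm = Llama(model_path="Light-R1.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Pythagorean theorem in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```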
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Light-R1-GGUF/resolve/main/Light-R1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mlx-community/Qwen3-14B-8bit | mlx-community | 2025-04-28T22:41:42Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-04-28T22:39:11Z | ---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-14B
tags:
- mlx
---
# mlx-community/Qwen3-14B-8bit
This model [mlx-community/Qwen3-14B-8bit](https://huggingface.co/mlx-community/Qwen3-14B-8bit) was
converted to MLX format from [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-14B-8bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|