modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Darkhn/Alkahest-V9.2-Unhinged-RP-Alpha-V1-Llama-3.3-70B | Darkhn | 2025-04-23T20:17:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"base_model:merge:Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B",
"base_model:TareksTesting/Alkahest-V9.2-LLaMa-70B",
"base_model:merge:TareksTesting/Alkahest-V9.2-LLaMa-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T15:46:03Z | ---
base_model:
- Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B
- TareksTesting/Alkahest-V9.2-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B](https://huggingface.co/Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B) as a base.
### Models Merged
The following models were included in the merge:
* [TareksTesting/Alkahest-V9.2-LLaMa-70B](https://huggingface.co/TareksTesting/Alkahest-V9.2-LLaMa-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksTesting/Alkahest-V9.2-LLaMa-70B
parameters:
weight: 0.5
density: 0.5
- model: Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B
parameters:
weight: 0.5
density: 0.5
base_model: Darkhn/Unhinged-RP-Alpha-V1-Llama-3.3-70B
merge_method: dare_ties
parameters:
normalize: false
int8_mask: true
tokenizer:
source: base
chat_template: llama3
dtype: bfloat16
name: Alkahest.X.Unhinged.Alpha
```
|
mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF | mradermacher | 2025-04-23T20:00:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Darkhn/Roleplay-Abliterated-Base-V3-Llama-3.3-70B",
"base_model:quantized:Darkhn/Roleplay-Abliterated-Base-V3-Llama-3.3-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T12:27:48Z | ---
base_model: Darkhn/Roleplay-Abliterated-Base-V3-Llama-3.3-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Darkhn/Roleplay-Abliterated-Base-V3-Llama-3.3-70B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
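For the multi-part files in the table below (Q6_K and Q8_0), here is a minimal Python sketch of rejoining the parts, assuming they are plain byte splits (as is typical for these multi-part GGUF uploads) and have already been downloaded into the current directory:
```python
# Assumption: the ".partNofM" files are simple byte splits; join them in order.
from pathlib import Path

quant = "Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q6_K.gguf"
parts = sorted(Path(".").glob(quant + ".part*"))   # part1of2, part2of2, ...
with open(quant, "wb") as out:
    for part in parts:
        out.write(part.read_bytes())               # concatenate into one GGUF file
```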
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Roleplay-Abliterated-Base-V3-Llama-3.3-70B-GGUF/resolve/main/Roleplay-Abliterated-Base-V3-Llama-3.3-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
oksurya/my-finetuned-gpt2 | oksurya | 2025-04-23T17:37:31Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-23T17:17:27Z | ---
license: apache-2.0
---
|
Joy10/gemma-2b-docjoybot-lora | Joy10 | 2025-04-23T16:44:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"medical",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"arxiv:1910.09700",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T05:49:35Z | ---
library_name: transformers
tags:
- medical
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
base_model:
- google/gemma-2-2b-it
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
# 🩺 A Medical Reasoning Chatbot Based on Gemma-2B + LoRA
This model is a fine-tuned version of `google/gemma-2-2b-it` enhanced with LoRA adapters. It specializes in medical question answering and clinical reasoning using structured, step-by-step thought processes.
## 📌 Key Features
- 🧠 **Chain-of-Thought (CoT) Reasoning** for complex medical queries
- 🧪 Fine-tuned on 25,000 samples from [`FreedomIntelligence/medical-o1-reasoning-SFT`](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT)
- 🧬 LoRA-based parameter-efficient tuning using Hugging Face PEFT + TRL
- 💡 Prompt template includes structured `<think>` tags to enhance reasoning clarity
- ⚡ Lightweight adapter (~10MB) for efficient deployment with the base model
## 🔍 Intended Use
This model is intended for **educational, research, and prototyping purposes** in the healthcare and AI domains. It performs best on medical diagnostic and reasoning tasks where step-by-step logical thinking is required.
> ⚠️ **Disclaimer**: This model is not intended for real-world clinical use without expert validation. It is a research-grade assistant only.
## 🏗️ How It Was Trained
- **Base Model**: `google/gemma-2-2b-it`
- **LoRA Config**: `r=8`, `alpha=16`, `dropout=0.05`
- **Frameworks**: `transformers`, `PEFT`, `TRL (SFTTrainer)`
- **Quantization**: 4-bit `nf4` for efficient inference using `bitsandbytes`
- **Hardware**: Trained on Kaggle GPU (T4), optimized for low-resource fine-tuning
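A minimal sketch of the setup described above, assuming standard `transformers`/`peft` APIs; the target modules are an assumption for illustration, not taken from the original training script:
```python
# Hedged sketch: 4-bit nf4 base model plus a LoRA config matching the values above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # 4-bit nf4 quantization via bitsandbytes
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it", quantization_config=bnb_config)
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,   # values from the list above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption, for illustration
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)           # lightweight (~10MB) trainable adapter
```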
## 💬 Prompt Format
```text
You are a helpful and knowledgeable AI medical assistant.
### Question:
{medical_question_here}
### Response:
<think>
{step-by-step_reasoning}
</think>
{final_answer}
```
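A hedged usage sketch applying the template above with this adapter; the question text and generation settings are illustrative, not taken from the original card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it", device_map="auto")
model = PeftModel.from_pretrained(model, "Joy10/gemma-2b-docjoybot-lora")

# Build the prompt in the format shown above; the question is only an example.
prompt = (
    "You are a helpful and knowledgeable AI medical assistant.\n"
    "### Question:\n"
    "What are common causes of chest pain in young adults?\n"
    "### Response:\n"
    "<think>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```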
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
osunlp/SAE_BioCLIP_24K_ViT-B-16_iNat21 | osunlp | 2025-04-23T16:17:45Z | 2 | 0 | null | [
"arxiv:2502.06755",
"license:mit",
"region:us"
] | null | 2025-02-21T02:09:41Z | ---
license: mit
---
# SAE for Imageomics's BioCLIP ViT-B/16 trained on iNat2021 Activations

* **Homepage:** https://osu-nlp-group.github.io/saev
* **Code:** https://github.com/OSU-NLP-Group/saev
* **Preprint:** https://arxiv.org/abs/2502.06755
* **Demos:** https://osu-nlp-group.github.io/saev#demos
* **Point of Contact:** [Sam Stevens](mailto:[email protected])
## Inference Instructions
Follow the instructions [here](https://osu-nlp-group.github.io/saev/saev/#inference-instructions).
|
Splintir/Nllb_dialecto | Splintir | 2025-04-23T12:13:30Z | 3 | 0 | peft | [
"peft",
"safetensors",
"m2m_100",
"arxiv:1910.09700",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:adapter:facebook/nllb-200-distilled-600M",
"region:us"
] | null | 2025-02-20T10:36:41Z | ---
base_model: facebook/nllb-200-distilled-600M
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
inumulaisk/finetuned_qwen_23_04_2025_v3 | inumulaisk | 2025-04-23T10:25:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-23T10:25:23Z | ---
base_model: unsloth/qwen2-1.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** inumulaisk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-1.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Elliott/LUFFY-Qwen-Instruct-7B | Elliott | 2025-04-23T07:55:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"reasoning",
"Zero-RL",
"conversational",
"arxiv:2504.14945",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-23T06:04:59Z | ---
base_model:
- Qwen/Qwen2.5-7B-Instruct
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- reasoning
- Zero-RL
---
# 📖Introduction

LUFFY is a reinforcement learning framework that bridges the gap between zero-RL and imitation learning by incorporating off-policy reasoning traces into the training process. Built upon GRPO, LUFFY combines on-policy rollouts with off-policy demonstrations during advantage estimation and introduces **policy shaping** via regularized importance sampling to emphasize low-probability yet crucial actions.
### Key Highlights:
- **Off-Policy Guidance:** Seamlessly integrates external reasoning traces to bootstrap learning from stronger models.
- **Dynamic Balance:** Learns when to imitate and when to explore, adapting over the course of training.
- **Policy Shaping:** Emphasizes important actions often ignored in standard policy gradients, enabling better generalization.
---
## Inference
Here’s an example of using LUFFY for inference:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_path="Elliott/LUFFY-Qwen-Math-7B-Zero"
question = "which number is larger? 9.11 or 9.9?"
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [{"role": "user", "content": question}]
chat = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
llm = LLM(model=model_path)
params = SamplingParams(temperature=0.6, max_tokens=8192)
outputs = llm.generate([chat], params)
print(outputs[0].outputs[0].text)
```
---
# 📃Evaluation
| **Model** | **AIME 2024** | **AIME 2025** | **AMC** | **MATH-500** | **Minerva** | **Olympiad** | **Avg.** |
|-----------------------------------|-------------|-------------|---------|---------------|-------------|---------------|----------|
| Qwen2.5-7B-Instruct | 11.9 | 7.6 | 44.1 | 74.6 | 30.5 | 39.7 | 34.7 |
| **LUFFY-Qwen-Instruct-7B** | **16.6** | **15.7** | **52.2** | **81.4** | **36.8** | **48.7** | **41.9** |
---
# 🌻Acknowledgement
LUFFY builds upon [veRL](https://github.com/volcengine/verl) and [deepscaler](https://github.com/agentica-project/rllm), and utilizes [vLLM](https://github.com/vllm-project/vllm) for inference. We utilize [Math-Verify](https://github.com/huggingface/Math-Verify) for math reasoning evaluation. We thank the open-source community for datasets and backbones, including [NuminaMath](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT), [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k), [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math), and [DeepSeek-R1](https://github.com/deepseek-ai/deepseek-r1) model.
Code: https://github.com/ElliottYan/LUFFY
# Citation
If you find our model, data, or evaluation code useful, please kindly cite our paper:
```bib
@misc{luffy,
title={Learning to Reason under Off-Policy Guidance},
author={Jianhao Yan and Yafu Li and Zican Hu and Zhi Wang and Ganqu Cui and Xiaoye Qu and Yu Cheng and Yue Zhang},
year={2025},
eprint={2504.14945},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.14945},
}
``` |
OpenMOSE/PRWKV-7-Qwen2.5-14B-Instruct-Preview-v0.1 | OpenMOSE | 2025-04-23T05:50:35Z | 0 | 2 | null | [
"RWKV",
"license:apache-2.0",
"region:us"
] | null | 2025-04-08T16:30:14Z | ---
license: apache-2.0
tags:
- RWKV
---
# PRWKV-7-Qwen2.5-14B-Instruct-Preview-v0.1 "Rina" with cxa075
<div align="center">
<img src="./PRWKV.png" style="border-radius: 15px; width: 60%; height: 60%; object-fit: cover; box-shadow: 10px 10px 20px rgba(0, 0, 0, 0.5); border: 2px solid white;" alt="PRWKV" />
</div>
## Model Overview
PRWKV-7-Qwen2.5-14B-Instruct "Rina" is a 14.2-billion-parameter RWKV model distilled from a Transformer-based teacher model (Qwen) through a multi-stage knowledge distillation (KD) process. The goal of this project was to imbue RWKV, a state model with RNN characteristics, with the reasoning ability and linguistic proficiency of a Transformer, while maintaining RWKV's streaming-friendly, attention-free efficiency.
Despite the architectural mismatch between the teacher and student models, this project demonstrates that cross-architecture distillation can not only work—it can thrive.
## What is RWKV-7?
RWKV-7 "Goose" with Expressive Dynamic State Evolution.
RWKV-7 can perform state tracking and recognize all regular languages, while retaining parallelizability of training
## Technical Specifications
- **Architecture**: RWKV-x070 "Goose"(RNN-based) https://github.com/BlinkDL/RWKV-LM
- **Architecture Modifications**: "CXA075" - GQA-style, no tokenshift, no groupnorm, gate, w clamp -0.5, Head=128
- **Parameters**: 14.2 billion (L48D5120 RWKVTimeMix + D13824 SwiGLU MLP)
- **Training Context Window**: 2048 (Stage1=2048, Stage2=2048)
- **Base Model**: Derived from Qwen/Qwen2.5-14B-Instruct
- **Development Stage**: Stage2 Knowledge Distillation, Experimental preview (no performance guarantees)
- **License**: Apache 2.0
## Key Innovations
This model builds upon and refines the attention replacement approaches pioneered by several notable projects, including:
- Qwerky7 (Qwen 2.5 72B + QRWKV7 Arch)
- Qwerky6 (Qwen 2.5 32B,72B + QRWKV7 Arch)
- ARWKV (Qwen 2.5 1.5B-7B + RWKV v7 Arch)
The primary advantage of using the RWKV architecture is the elimination of KV-Cache requirements, allowing for infinite context generation with static VRAM consumption.
### Distillation Process
#### Stage 1: Hidden State Alignment ("The Warm-up of Pain")
In this first stage, the student model was trained to match the internal hidden states of the teacher using MSE (Mean Squared Error) loss.
Think of it like this: we asked a neural RNN to think like a Transformer, and it screamed a little at first. 😅
For this stage, about 60M tokens were used. Alignment was crucial—without it, the KL divergence in Stage 2 remained impossibly high (often > 10.0), and the student simply learned to hallucinate polite nonsense. Hidden alignment served as the backbone, and techniques like SVD filtering and temporal loss were explored (and mildly cursed at).
#### Stage 2: KL-Divergence Distillation ("RWKV Learns to Talk")
Once the student model's neurons were sufficiently aligned, KL-divergence loss was used to transfer the teacher’s output distribution to the student.
Temperature was set to **1.0**, as higher values (e.g., 2.0) turned out to be a one-way ticket to syntactic chaos.
After roughly 30M tokens, the KL divergence steadily dropped to ~**0.39**, and the model began producing coherent, question-aware responses. In many cases, its outputs became indistinguishable from those of the Transformer teacher—though RWKV's generation speed was significantly higher.
Notably, context length was increased to **2048 tokens**, which proved critical. Earlier failures at context 768 (hi, 24B!) often resulted in the model memorizing just the System Prompt. Increasing the context allowed the model to capture full discourse, not just the opening monologue.
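A minimal sketch of the two losses described in Stages 1 and 2, assuming standard PyTorch; tensor shapes and exact weighting are assumptions, not the project's actual training code:
```python
import torch.nn.functional as F

def stage1_hidden_alignment_loss(student_hidden, teacher_hidden):
    # Stage 1: align the student's hidden states with the teacher's via MSE.
    return F.mse_loss(student_hidden, teacher_hidden)

def stage2_kl_loss(student_logits, teacher_logits, temperature=1.0):
    # Stage 2: transfer the teacher's output distribution with KL divergence (T=1.0).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```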
---
### Performance
- **KL divergence**: ~0.08 after 160M tokens
- **Pre-fill speed**: > 600 tokens/sec (on an RTX 4090)
- **Generation**: Fluent, structured, and context-aware
- **Inference**: 30+ tokens/sec even on consumer GPUs
## How to Use (under construction)
- PC requirements: NVIDIA GPU with 24GB+ VRAM (ROCm also works, but FP16 only)
- OS: Windows (WSL2 with CUDA) or Linux
- Install RWKV-Infer (see installation instructions): https://github.com/OpenMOSE/RWKV-Infer
- Create a "models" folder and place PRWKV7-cxa075-qwen14b-stage2-final.pth in it
- Load the model (choose fp16, fp6, or fp5; do not choose FP8)
- Requires 32GB VRAM in FP16, 18GB VRAM in FP6
- Enjoy text chats via Open WebUI or SillyTavern :)
```
curl http://127.0.0.1:9000/loadmodel -X POST -H "Content-Type: application/json" -d '{"model_filename":"models/PRWKV7-cxa075-qwen14b-stage2-final.pth","model_viewname":"PRWKV7-cxa075 Qwen 2.5 14B","model_strategy":"fp6","adapter_filename":"","adapter_mode":"", "template":"qwen", "endtoken":"<|im_end|>"}'
```
You can then use this model via the OpenAI-compatible API at http://127.0.0.1:9000/v1 by setting the model name to "PRWKV7-cxa075 Qwen 2.5 14B".
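A short sketch of querying the served model through that endpoint, assuming the standard `openai` Python client; the API key is a placeholder since RWKV-Infer runs locally:
```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:9000/v1", api_key="not-needed")  # local server
resp = client.chat.completions.create(
    model="PRWKV7-cxa075 Qwen 2.5 14B",
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
)
print(resp.choices[0].message.content)
```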
## Training Infrastructure
- Hardware: 1 x AMD MI300X GPU
- Training Duration: 2 days (Stages 1-2)
- Stage 1: 60M tokens (LR 1e-4)
- Stage 2: 160M tokens (Temp=1.0, LR 3e-5 -> 1e-6, cosine schedule)
- Stage 3: 800M tokens (T.B.D.)
## Acknowledgements
This work was made possible through the contributions of:
- SmerkyG - thank you for big help :)
- RecursalAI
- RWKV-Red-Team
- BlinkDL(RWKV v7 Architecture)
- https://github.com/OpenMOSE/RWKVInside
- https://github.com/OpenMOSE/RWKV-LM-RLHF
- https://github.com/OpenMOSE/RWKV-Infer
- https://huggingface.co/RWKV-Red-Team/ARWKV-7B-Preview-0.1
- https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1
- https://huggingface.co/featherless-ai/Qwerky-72B-Preview
### Limitations
- Trained with only 220M tokens — while efficient, this is minimal by LLM standards
- MLP layers were frozen to preserve teacher knowledge and minimize compute
- Instruction-following behavior may be slightly less robust on complex or abstract prompts
---
### Future Plans
- Continue training to 100M+ tokens to reinforce rare cases and improve output stability
- Add light CE loss blending to increase specificity and factuality
- Explore PEFT (e.g., LoRA, IA3) for fine-tuning on specific domains
---
### Lessons (and Memes) from the Journey
- Every time KL spiked, I aged 2 years.
- Watching RWKV say "I don't know" in perfect English while hallucinating was… poetic.
- Temp=2.0? More like Temp=TOO.MUCH. 😵
- Learned the hard way that if your context is too short, RWKV becomes a System Prompt cosplayer.
- When KL finally dropped below 0.4, I cried. Then I benchmarked it.
This was a solo project with limited cloud budget and a lot of coffee-fueled nights. But RWKV-14B now speaks with reason, and that's worth every token.
## ❤️ Final Words
This project was built by a single player with limited compute, stubborn optimism, and a disturbing tolerance for GPU crashes.
If you're an LLM developer, you're not alone in talking to your loss graph like it's a houseplant.
If you’re thinking of distilling a transformer into an RNN, let me tell you:
> It’s like teaching a cat to speak Latin — but when it finally meows “E pluribus unum,”
> it’s all worth it.
Enjoy the model. More to come.
---
🛠️ *Built with open-source passion.
💬 Powered by caffeine.
🔥 Fueled by failure.
Pushing the limits of RNNs*
---
## License
Released under the Apache 2.0 license.
2025 OpenMOSE
https://x.com/_m0se_ |
xiwenc1/OpenRS-GRPO3 | xiwenc1 | 2025-04-22T20:46:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-22T10:09:49Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
datasets: knoveleng/open-rs
library_name: transformers
model_name: OpenRS-GRPO3
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for OpenRS-GRPO3
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xiwenc1/OpenRS-GRPO3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myopen-rs/huggingface/runs/m4v11vi7)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF | mradermacher | 2025-04-22T17:53:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Cagatayd/Llama3.2-3B-Reasoning-SFT-fTuned-v1",
"base_model:quantized:Cagatayd/Llama3.2-3B-Reasoning-SFT-fTuned-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-22T17:28:38Z | ---
base_model: Cagatayd/Llama3.2-3B-Reasoning-SFT-fTuned-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Cagatayd/Llama3.2-3B-Reasoning-SFT-fTuned-v1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Reasoning-SFT-fTuned-v1-GGUF/resolve/main/Llama3.2-3B-Reasoning-SFT-fTuned-v1.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ASethi04/google-gemma-2-9b-opc-sft-first-lora | ASethi04 | 2025-04-22T10:52:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"endpoints_compatible",
"region:us"
] | null | 2025-04-22T09:20:43Z | ---
base_model: google/gemma-2-9b
library_name: transformers
model_name: google-gemma-2-9b-opc-sft-first-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for google-gemma-2-9b-opc-sft-first-lora
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/google-gemma-2-9b-opc-sft-first-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/vt0r0ynj)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF | mradermacher | 2025-04-21T11:51:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:PeterLauLukCh/Qwen2.5-14B-Instruct-LIMO-new",
"base_model:quantized:PeterLauLukCh/Qwen2.5-14B-Instruct-LIMO-new",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T11:18:49Z | ---
base_model: PeterLauLukCh/Qwen2.5-14B-Instruct-LIMO-new
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PeterLauLukCh/Qwen2.5-14B-Instruct-LIMO-new
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-14B-Instruct-LIMO-new-GGUF/resolve/main/Qwen2.5-14B-Instruct-LIMO-new.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
faraya1/Qwen2.5-3B-RAG-API-Finetuned-step-180 | faraya1 | 2025-04-21T11:16:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-21T11:16:08Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Darkhn/Unnamed-Test-V3-Llama-3.3-70B | Darkhn | 2025-04-18T11:43:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Mawdistical/Draconic-Tease-70B",
"base_model:merge:Mawdistical/Draconic-Tease-70B",
"base_model:ReadyArt/Forgotten-Abomination-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Abomination-70B-v5.0",
"base_model:SentientAGI/Dobby-Unhinged-Llama-3.3-70B",
"base_model:merge:SentientAGI/Dobby-Unhinged-Llama-3.3-70B",
"base_model:Steelskull/L3.3-MS-Nevoria-70b",
"base_model:merge:Steelskull/L3.3-MS-Nevoria-70b",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-18T11:04:05Z | ---
base_model:
- SentientAGI/Dobby-Unhinged-Llama-3.3-70B
- ReadyArt/Forgotten-Abomination-70B-v5.0
- Steelskull/L3.3-MS-Nevoria-70b
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- Mawdistical/Draconic-Tease-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Steelskull/L3.3-MS-Nevoria-70b](https://huggingface.co/Steelskull/L3.3-MS-Nevoria-70b) as a base.
### Models Merged
The following models were included in the merge:
* [SentientAGI/Dobby-Unhinged-Llama-3.3-70B](https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B)
* [ReadyArt/Forgotten-Abomination-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Abomination-70B-v5.0)
* [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B)
* [Mawdistical/Draconic-Tease-70B](https://huggingface.co/Mawdistical/Draconic-Tease-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ReadyArt/Forgotten-Abomination-70B-v5.0
parameters:
weight: 1.0
- model: Mawdistical/Draconic-Tease-70B
parameters:
weight: 1.0
- model: Steelskull/L3.3-MS-Nevoria-70b
parameters:
weight: 1.0
- model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B
parameters:
weight: 1.0
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
parameters:
weight: 1.0
merge_method: model_stock
base_model: Steelskull/L3.3-MS-Nevoria-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
filter_wise: false
smooth: false
allow_negative_weights: false
chat_template: llama3
tokenizer:
source: base
```
|
RawandLaouini/ArabicEchoV2 | RawandLaouini | 2025-04-16T21:05:08Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-04-16T20:58:03Z |
# Model Card for RawandLaouini/ArabicEchoV2
## Model Details
### Model Description
This is a fine-tuned version of the `openai/whisper-medium` model, adapted for Arabic Automatic Speech Recognition (ASR) using LoRA (Low-Rank Adaptation). The model was trained on the custom `Whisper_Arabic_Merged_v6` dataset, containing 1183 audio samples, to improve transcription accuracy for Arabic speech.
- **Developed by**: Rawand Laouini
- **Finetuned from model**: `openai/whisper-medium`
- **Model type**: Transformer-based ASR model with LoRA
- **Language(s)**: Arabic
- **License**: MIT (or specify your preferred license)
- **Shared by**: Rawand Laouini
## Uses
### Direct Use
This model can be used for transcribing Arabic speech to text, ideal for applications like voice assistants, subtitle generation, or educational tools tailored to Arabic speakers.
### Out-of-Scope Use
The model should not be used for real-time transcription without optimization, nor for languages other than Arabic without retraining.
## Bias, Risks, and Limitations
Trained on the `Whisper_Arabic_Merged_v6` dataset, the model may reflect biases or limitations in dialectal coverage or audio quality. Performance may vary with different Arabic dialects or noisy conditions. Users should validate outputs for critical use.
## Recommendations
Test the model on your specific use case and consider expanding the dataset for better dialectal or noise robustness.
## How to Get Started with the Model
Use the following code to load and use the model:
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import PeftModel
import librosa  # any loader that yields a 16 kHz mono waveform works

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(model, "RawandLaouini/ArabicEchoV2")
model.eval()

# Load an Arabic speech clip at 16 kHz (the file path is illustrative)
audio, _ = librosa.load("arabic_sample.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features)
transcription = processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```
## Training Details
### Training Data
- Dataset: `RawandLaouini/Whisper_Arabic_Merged_v6` (1183 samples)
- Training split: 946 examples
- Validation split: Manual evaluation on 50 examples
### Training Procedure
#### Preprocessing
Audio was processed to match Whisper's requirements, with input features extracted using the Whisper processor.
#### Training Hyperparameters
- Batch size: 1 (per device)
- Gradient accumulation steps: 1
- Learning rate: 1e-4
- Warmup steps: 100
- Max steps: 300
- Optimizer: AdamW
- Mixed precision: FP16
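A hedged sketch of training arguments matching the hyperparameters above, assuming the `transformers` Seq2SeqTrainingArguments API; the output directory is a placeholder:
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-ar-lora",   # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    learning_rate=1e-4,
    warmup_steps=100,
    max_steps=300,
    fp16=True,                             # mixed precision as listed above
    optim="adamw_torch",
)
```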
#### Speeds, Sizes, Times
- Training time: ~2.43 minutes for 300 steps
- Model size: Lightweight (LoRA adapters)
## Evaluation
### Testing Data
Manual evaluation on 50 examples from the validation split.
### Metrics
- Word Error Rate (WER): 0.2969
- Character Error Rate (CER): 0.0700
### Results
The model achieves a WER of 29.69% and CER of 7.00% on the manual evaluation set, indicating good transcription accuracy for Arabic speech.
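For reference, WER and CER figures like those above can be computed with the `evaluate` library; the strings below are illustrative placeholders, not the actual evaluation set:
```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["مرحبا بكم في النموذج"]    # ground-truth transcripts (illustrative)
predictions = ["مرحبا بكم في النموذج"]   # model transcriptions (illustrative)
print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```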
## Environmental Impact
- **Hardware Type**: NVIDIA GPU (14.74 GiB)
- **Hours used**: ~0.04 hours (2.43 minutes)
- **Cloud Provider**: Local/Colab (unspecified)
- **Compute Region**: Unspecified
- **Carbon Emitted**: Minimal (estimated < 0.01 kg CO2e using Lacoste et al., 2019)
## Citation
### BibTeX:
```bibtex
@misc{laouini2025arabicechov2,
author = {Rawand Laouini},
title = {ArabicEchoV2: Fine-tuned Whisper-medium for Arabic ASR with LoRA},
year = {2025},
howpublished = {\url{https://huggingface.co/RawandLaouini/ArabicEchoV2}}
}
```
### APA:
Laouini, R. (2025). *ArabicEchoV2: Fine-tuned Whisper-medium for Arabic ASR with LoRA*. Retrieved from https://huggingface.co/RawandLaouini/ArabicEchoV2
## Model Card Authors
- Rawand Laouini
## Model Card Contact
- Email: [[email protected]]
|
ReadyArt/Llama_3.x_70b_Hexagon_Purple_V3_EXL2_5.0bpw_H8 | ReadyArt | 2025-04-15T23:17:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm",
"base_model:merge:Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm",
"base_model:Nexesenex/Llama_3.1_70b_HighPriestess_R1_V1",
"base_model:merge:Nexesenex/Llama_3.1_70b_HighPriestess_R1_V1",
"base_model:Nexesenex/Llama_3.3_70b_DarkHorse",
"base_model:merge:Nexesenex/Llama_3.3_70b_DarkHorse",
"base_model:Nexesenex/Llama_3.x_70b_SmarTricks_V1.01",
"base_model:merge:Nexesenex/Llama_3.x_70b_SmarTricks_V1.01",
"base_model:Steelskull/L3.3-Electra-R1-70b",
"base_model:merge:Steelskull/L3.3-Electra-R1-70b",
"base_model:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"base_model:merge:nbeerbower/Llama3.1-Gutenberg-Doppel-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2025-04-15T22:40:19Z | ---
base_model:
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- Nexesenex/Llama_3.x_70b_SmarTricks_V1.01
- Nexesenex/Llama_3.1_70b_HighPriestess_R1_V1
- Steelskull/L3.3-Electra-R1-70b
- Nexesenex/Llama_3.3_70b_DarkHorse
- Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm
library_name: transformers
tags:
- mergekit
- merge
---
# about
V3.0 changes (relatively minor update):
- DarkHorse replaces DoppelGangerR1 to add a bit of Negative Llama at the expense of a bit of Fallen Llama R1.
- A bit of Fallen Llama is recovered by using Smartricks instead of Smartracks as a base.
- Priestess is upgraded with Lumitron Lorablated.
- Tess is merged with Hitachi FLDx2 in the perplexity dropper model.
Electra R1 and GutenbergDoppel are kept as they were.
If you have V2 already, this model is quite similar, and the difference might not be worth a download.
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Nexesenex/Llama_3.x_70b_SmarTricks_V1.01](https://huggingface.co/Nexesenex/Llama_3.x_70b_SmarTricks_V1.01) as a base.
### Models Merged
The following models were included in the merge:
* [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B)
* [Nexesenex/Llama_3.1_70b_HighPriestess_R1_V1](https://huggingface.co/Nexesenex/Llama_3.1_70b_HighPriestess_R1_V1)
* [Steelskull/L3.3-Electra-R1-70b](https://huggingface.co/Steelskull/L3.3-Electra-R1-70b)
* [Nexesenex/Llama_3.3_70b_DarkHorse](https://huggingface.co/Nexesenex/Llama_3.3_70b_DarkHorse)
* [Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm](https://huggingface.co/Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm
parameters:
weight: 1.0
- model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
parameters:
weight: 1.0
- model: Nexesenex/Llama_3.1_70b_HighPriestess_R1_V1
parameters:
weight: 1.0
- model: Steelskull/L3.3-Electra-R1-70b
parameters:
weight: 1.0
- model: Nexesenex/Llama_3.3_70b_DarkHorse
parameters:
weight: 1.0
base_model: Nexesenex/Llama_3.x_70b_SmarTricks_V1.01
dtype: bfloat16
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
chat_template: auto
tokenizer:
source: union
```
|