---
# 🦥 Model Card
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo # Group Relative Policy Optimization
license: apache-2.0
language:
- en
---
# 🦥 Uploaded Model
| **Field** | **Value** |
|-----------------------|--------------------------------------------|
| **Developed by** | **MasterControlAIML** |
| **License** | Apache 2.0 |
| **Finetuned from** | `unsloth/Qwen2.5-3B-Instruct` |
| **Training Framework**| [Unsloth](https://github.com/unslothai/unsloth) Γ Hugging Face TRL |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="190"/>](https://github.com/unslothai/unsloth)
---
## 🚀 What's New?
> *The protein-shake sequel to **MasterControlAIML/DeepSeek-R1-Qwen2.5-1.5b-SFT-R1-JSON-Unstructured-To-Structured**: now with more neurons, zero SFT, and a league of reward functions.*
| Upgrade | Explanation |
|--------------------|------------------------------------------------------------------------------|
| **Bigger Backbone**| 1.5 B → **3 B** Qwen 2.5 for bigger reasoning muscles. |
| **Pure RL** | No supervised fine-tuning; the policy learned *only* from reward signals (GRPO). |
| **LM-as-Judge** | A separate LLM rates each candidate for correctness, JSON validity, style… |
| **2× Faster Train**| Unsloth's flash-attention & fused ops = less VRAM, more speed. |
---
## 🛠️ Intended Use
* Convert messy prose, logs, or audit notes into a pristine JSON document that follows a complex, nested schema.
* Drop-in replacement for any pipeline using the older DeepSeek-R1 1.5 B structurerβjust swap the checkpoint and enjoy the headroom.
---
## 🧠 How to Use (Reasoning + JSON)
The snippet below:
1. **Primes** the model with the *exact* Pydantic schema, so it outputs the right keys.
2. Makes the model **think step-by-step** (reasoning) but still wraps the final JSON in an easy-to-parse container.
3. Uses the correct repo name: `MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora`.
```python
# ─────────────────────────────────────────────────────────────────────────────
# QUICK-START
# Structured-data extraction with reasoning + JSON output
# ─────────────────────────────────────────────────────────────────────────────
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch, json, textwrap, inspect
from pydantic import BaseModel
from typing import List, Optional
MODEL = "MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora"
# 1️⃣ Inline schema (keeps the LLM on-rails) ─────────────────────────────────
class MultipleChoice(BaseModel):
question: str
options: List[str]
selected: str
class FormField(BaseModel):
fieldName: str
value: str
notes: Optional[str] = ""
class Calculation(BaseModel):
formula: str
result: str
notes: Optional[str] = ""
class Metadata(BaseModel):
reportDate: str
auditorId: Optional[str] = None
comments: Optional[str] = None
# Minimal Table / Checkbox models (the original assumed these were defined elsewhere)
class Table(BaseModel):
    headers: List[str]
    rows: List[List[str]]

class Checkbox(BaseModel):
    label: str
    checked: bool

class Content(BaseModel):
    paragraphs: List[str]
    tables: List[Table]
    checkboxes: List[Checkbox]
    multipleChoice: List[MultipleChoice]
    formFields: List[FormField]
    calculations: List[Calculation]
    metadata: Optional[Metadata] = Metadata(reportDate="")
class Section(BaseModel):
id: str
title: str
content: Content
class Document(BaseModel):
documentTitle: str
documentDate: str
sections: List[Section]
# Embed the full schema (all models, not just Document) in the prompt
SCHEMA_TEXT = "\n".join(
    inspect.getsource(cls)
    for cls in (MultipleChoice, FormField, Calculation, Metadata,
                Table, Checkbox, Content, Section, Document)
)
# 2️⃣ Build prompts ──────────────────────────────────────────────────────────
SYSTEM_PROMPT = textwrap.dedent(f"""
You are an expert **data-extraction assistant**.
Extract structured info from unstructured text **exactly** following the Pydantic schema.
── Schema ──
{SCHEMA_TEXT}
────────────
Rules:
1. Follow the schema for keys & nesting.
2. Copy values verbatim when possible.
3. If a field is missing, return null.
4. Output your step-by-step reasoning first.
5. Then return ONLY the JSON inside this wrapper:
final answer[ json object: {{ ... }} ]
Format:
<reasoning>β¦</reasoning>
<answer>
final answer[ json object: {{ β¦ }} ]
</answer>
""").strip()
UNSTRUCTURED_TEXT = """
12 April 2025 – Onsite audit performed by Jane Smith.
Observations: Two fire extinguishers past expiry; emergency lights functional.
Calculations: Total extinguishers = 8, expired = 2 → 25 % overdue.
"""
USER_PROMPT = textwrap.dedent(f"""
### Task
Convert the following unstructured text to the schema.

### Text
{UNSTRUCTURED_TEXT}
""").strip()
# 3️⃣ Generate ───────────────────────────────────────────────────────────────
tok = AutoTokenizer.from_pretrained(MODEL, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL,
device_map="auto",
torch_dtype=torch.bfloat16
)
gen = pipeline("text-generation", model=model, tokenizer=tok,
max_new_tokens=512, do_sample=False)
# Build the prompt with Qwen2.5's chat template instead of hand-rolled tags
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": USER_PROMPT},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
raw_out = gen(prompt)[0]["generated_text"]
# 4️⃣ Slice out the JSON ─────────────────────────────────────────────────────
# rfind: skip the copies of the wrapper that the prompt itself contains
start = raw_out.rfind("final answer[")
end = raw_out.rfind("]") + 1
json_text = raw_out[start:end].split("json object:")[-1].strip(" []\n")
data = json.loads(json_text)  # ✅ raises if malformed

print(raw_out)  # reasoning + JSON
print("\n✅ Parsed object:\n", data)
```
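After parsing, you can optionally round-trip the dict through the same Pydantic schema to catch structural drift early. A minimal sketch (`model_validate` assumes Pydantic v2; on v1 use `Document.parse_obj`):

```python
from pydantic import ValidationError

# Optional sanity check: re-validate the parsed dict against the schema (Pydantic v2)
try:
    doc = Document.model_validate(data)
    print(f"Valid document: {doc.documentTitle!r} with {len(doc.sections)} section(s)")
except ValidationError as err:
    print("Schema mismatch:\n", err)
```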
### Why it Works 🧠
* **Schema-priming** ensures key-level fidelity: no "creative" field names.
* **Chain-of-thought** improves factual extraction (it was rewarded during GRPO).
* The `final answer[…]` wrapper makes downstream parsing a one-liner; see the sketch below.
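A slightly more defensive version of that one-liner (the helper name and regex are illustrative, not part of the released code; it assumes the model kept to the wrapper format):

```python
import json, re

def parse_final_answer(raw: str) -> dict:
    """Extract the payload from the last `final answer[ json object: {...} ]`."""
    start = raw.rfind("final answer[")  # last occurrence skips the prompt echo
    if start == -1:
        raise ValueError("no `final answer[...]` wrapper found")
    match = re.search(r"json object:\s*(\{.*\})\s*\]", raw[start:], re.DOTALL)
    if match is None:
        raise ValueError("wrapper found but no JSON payload")
    return json.loads(match.group(1))
```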
---
## 🏋️ Training Recipe (Condensed)
| Setting        | Value                                                                                      |
| -------------- | ------------------------------------------------------------------------------------------ |
| **Algorithm**  | GRPO (policy = this LM; reward LM = `Qwen2.5-7B` w/ JSON-validator head; see sketch below) |
| **Epochs**     | 3 (effective)                                                                              |
| **Batch**      | Grad-accum 8, bfloat16                                                                     |
| **Optimizer**  | Fused AdamW                                                                                |
| **Throughput** | ≈ 45 k tokens/s on 8×A100                                                                  |
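For a concrete sense of the reward side, here is an illustrative sketch only: the model's actual reward functions and judge prompt are not published here, and the signature simply follows TRL's `GRPOTrainer` convention of a callable mapping completions to per-sample floats.

```python
import json
from typing import List

def json_validity_reward(completions: List[str], **kwargs) -> List[float]:
    """Toy GRPO reward: 1.0 if the completion's wrapped JSON parses, else 0.0."""
    rewards = []
    for text in completions:
        start = text.rfind("final answer[")
        end = text.rfind("]")
        try:
            payload = text[start:end].split("json object:", 1)[1]
            json.loads(payload.strip(" []\n"))
            rewards.append(1.0)
        except (IndexError, ValueError):
            rewards.append(0.0)
    return rewards
```

In practice the card describes an LM judge scoring correctness and style as well; validity is just the easiest reward to sketch.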
---
## 📊 Evaluation (WIP)
| Metric                    | Status |
| ------------------------- | ------ |
| Exact-Match JSON Accuracy | ⏳     |
| Structural F1             | ⏳     |
| Valid-JSON Rate           | ⏳     |
Stay tuned; numbers landing faster than you can say "schema validation." 🛰️
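Until those numbers land, a hypothetical self-serve harness for the Valid-JSON Rate row, reusing the `parse_final_answer` helper sketched earlier:

```python
from typing import List

def valid_json_rate(outputs: List[str]) -> float:
    """Fraction of model outputs whose wrapped JSON parses cleanly."""
    ok = sum(1 for raw in outputs if _parses(raw))
    return ok / max(len(outputs), 1)

def _parses(raw: str) -> bool:
    try:
        parse_final_answer(raw)  # helper from the parsing sketch above
        return True
    except ValueError:           # JSONDecodeError subclasses ValueError
        return False
```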
---
## 🤗 Citation
```bibtex
@misc{bhaviktheslider_2025_unsloth_qwen2.5_3b_grpo,
title = {An Unsloth-accelerated GRPO-trained Qwen 2.5-3B for JSON structuring},
author = {MasterControlAIML},
year = {2025},
howpublished = {\url{https://huggingface.co/MasterControlAIML/DeepSeek-R1-Qwen2.5-3b-LLM-Judge-Reward-JSON-Unstructured-To-Structured-Lora}}
}
```
*May your JSON always parse and your losses always converge!* 🚀