---
license: mit
datasets:
- avemio/German-RAG-CPT-HESSIAN-AI
- avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI
language:
- en
- de
base_model:
- avemio/German-RAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI
pipeline_tag: question-answering
tags:
- German
- RAG
- Retrieval
- Question-Answering
- Summarization
- Reasoning
---

# German-RAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI

<!-- Provide a quick summary of what the model is/does. -->

**German-RAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions that drive German research collaboration in business-focused generative AI by 2025.

Our German-RAG-PHI-SFT model is trained on the **[German-RAG-SFT](https://huggingface.co/datasets/avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI) dataset**.

## Model Details

The core models released in this batch are the following: 
| Model | Training Tokens |
|------|--------|
| [German-RAG-Phi-CPT](https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI)   | 507.47 million |
| [German-RAG-Phi-SFT](https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI) |  2.03 billion  |  
| [German-RAG-Phi-ORPO](https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) |  2.0577 billion  | 
### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Avemio AI Team
- **Supported by:** Hessian AI
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** German, English
- **License:** The code and model are released under Apache 2.0.
- **Contact:** [[email protected]](mailto:[email protected])


### Model Sources

<!-- Provide the basic links for the model. -->

- **Training Study:** [Training Study](https://avemio.digital/wp-content/uploads/2025/01/German-RAG-TRAINING-STUDY-Advancing-German-Language-AI-with-hessian-AI.pdf)
- **Repositories:** 
    - Training: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing)
    - Evaluation code: 
        - [German-RAG-LLM-HARD-BENCHMARK](https://github.com/avemio-digital/German-RAG-LLM-HARD-BENCHMARK.git)
        - [German-RAG-LLM-EASY-BENCHMARK](https://github.com/avemio-digital/German-RAG-LLM-EASY-BENCHMARK.git)
-  **Technical blog post:** 
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference
Quickly get inference running by installing the required dependencies (`pip install transformers torch accelerate`), then proceed as usual with Hugging Face:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "avemio/German-RAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI"

# Load model and tokenizer; device_map="auto" spreads weights over available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The chat template uses ChatML-style markers; generation should stop at <|im_end|>.
im_end_token_id = tokenizer.convert_tokens_to_ids('<|im_end|>')

messages = [
    {"role": "system", "content": "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine Überlegungen zur Lösung des Problems."},
    {"role": "user", "content": "Ferdinand steht vor der Herausforderung, eine faire Besuchsregelung für seine drei Kinder zu finden, die den Bedürfnissen jedes einzelnen Kindes gerecht wird. Jedes Kind hat unterschiedliche Vorlieben und Bedürfnisse, die in den Besuchsplan integriert werden müssen. Er muss sicherstellen, dass die Regelung sowohl den Interessen der Kinder als auch den rechtlichen Vorgaben entspricht. Ferdinand hat eine Woche Zeit, um einen Vorschlag zu erarbeiten, den er mit seinem Anwalt besprechen kann."}
]

# add_generation_prompt=True appends the assistant header so the model starts
# its own turn instead of continuing the user message.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Greedy decoding: with do_sample=False, sampling parameters such as
# temperature, top_k and top_p have no effect, so they are omitted here.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,  # cap on newly generated tokens
    do_sample=False,
    eos_token_id=im_end_token_id,
    pad_token_id=tokenizer.eos_token_id,
    repetition_penalty=1.1,
)

# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```


### Fine-tuning
We provide a comprehensive Google Colab notebook that guides you through fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings: [Colab-Notebook](https://colab.research.google.com/drive/18SH_aYLCnw1K7cRGOTTZ80y98V5Kquxb?usp=sharing).

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->
The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context.

Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.

-   **Language quality:** This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
-   **Overall correctness:** The accuracy and correctness of the content were evaluated under this metric.
-   **Instruction following:** This metric assessed the model's ability to follow specific instructions provided for each task.
-   **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets; a minimal sketch of such a weighted combination follows.
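
The card does not publish the exact weighting, so the following sketch is illustrative only: a weighted average of the three per-metric scores on a 0-100 scale, with hypothetical weights.

```python
# Illustrative only: the weights below are hypothetical; the card does not
# publish the actual weighting used to compute the overall score.
def overall_score(language_quality: float,
                  correctness: float,
                  instruction_following: float,
                  weights=(0.3, 0.4, 0.3)) -> float:
    """Weighted average of the three evaluation metrics (0-100 scale)."""
    w_lang, w_corr, w_instr = weights
    return (w_lang * language_quality
            + w_corr * correctness
            + w_instr * instruction_following)

print(overall_score(80.0, 75.0, 85.0))  # hypothetical inputs -> 79.5
```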


| Metric | [Vanilla-Phi-3.5-Mini-4B](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | **[German-RAG-PHI-SFT](https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI)** | [German-RAG-PHI-ORPO](https://huggingface.co/avemio/German-RAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) | [German-RAG-PHI-MERGED]() | GPT-3.5-TURBO |
|---|---|---|---|---|---|
| Average_language_quality | 75.11 | **78.88** | 78.13 | 85.41 | 91.86 |
| **OVERALL SCORES (weighted):** | | | | | |
| extraction_recall | 18.0 | **37.5** | 32.0 | 61.8 | 87.2 |
| qa_multiple_references | 65.8 | **70.6** | 74.8 | 84.8 | 77.2 |
| qa_without_time_difference | 71.2 | **88.0** | 87.3 | 88.0 | 83.1 |
| qa_with_time_difference | 64.6 | **89.3** | 86.9 | 89.1 | 83.2 |
| relevant_context | 72.3 | **72.8** | 69.1 | 84.4 | 89.5 |
| summarizations | 74.6 | **83.2** | 81.1 | 84.9 | 86.9 |

## Training Details

### Data
For training data details, please see the [German-RAG-SFT-Dataset](https://huggingface.co/datasets/avemio/German-RAG-SFT-ShareGPT-HESSIAN-AI) documentation.

#### Description
The SFT tasks represent a focused approach to enhancing model capabilities through specialized RAG examples. Most of these tasks were developed using synthetically enhanced data derived from the German Wikipedia, accessed through Cohere's prepared dataset on HuggingFace (licensed CC-BY-SA 4.0). This data was structured in a training knowledge graph in which Question-Answer nodes were connected to both relevant and irrelevant Context nodes from the same Wikipedia page, creating a rich and challenging network of relationships for training (see the sketch below). The only exceptions are the function-calling dataset, which was derived and extended from Salesforce's XLAM function-calling dataset by including function-call results and final-answer generation, and the reasoning task, whose synthetic generation was inspired by the Tencent paper ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094) to cover a diverse set of reasoning tasks across various domains.
This comprehensive set of SFT tasks ensures the model develops robust capabilities across a wide range of practical applications while maintaining consistent output formats and clear communication patterns. Each task type has been carefully designed to address specific business needs while maintaining high standards of accuracy and reliability, making them valuable tools for organizations looking to enhance their information processing and knowledge management capabilities.
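
A minimal sketch of the training knowledge graph described above; the node and edge structure is illustrative, not the dataset's actual schema.

```python
# One Question-Answer node linked to both a relevant and an irrelevant
# context node from the same Wikipedia page (illustrative structure only).
qa_node = {
    "id": "qa-1",
    "question": "Wann wurde die Universität gegründet?",  # "When was the university founded?"
    "answer": "Die Universität wurde 1607 gegründet.",    # "The university was founded in 1607."
}
contexts = {
    "ctx-1": {"text": "Passage from the Wikipedia page that supports the answer."},
    "ctx-2": {"text": "Passage from the same page that is irrelevant to the question."},
}
# Edges connect the QA node to both relevant and irrelevant context nodes.
edges = [("qa-1", "ctx-1", "relevant"), ("qa-1", "ctx-2", "irrelevant")]
```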

#### Task Instruction Format
The implementation of these SFT tasks follows a carefully structured format designed for consistency and clarity. Each task begins with comprehensive system instructions, often wrapped in XML tags, that meta-define expected inputs, outputs, constraints, and example interactions. This standardization enables clear communication between the model and users while ensuring reliable results.

The context information utilized in these tasks is provided in a standardized JSON structure, including unique identifiers, source text, timestamps where relevant, and task-specific metadata. This format was specifically chosen to allow seamless integration with retrieved data from RAG systems, eliminating the need for additional formatting steps in production environments.

Source references are handled through a consistent system of numerical indices for context references, JSON-formatted citation markers, and clear time-difference notifications when temporal aspects are relevant. This systematic approach to referencing ensures traceability and reliability in the model's responses.

The implementation of these tasks within RAG systems can significantly improve organizational efficiency by reducing manual processing time, ensuring consistency in information handling, improving accuracy in data extraction and analysis, and enabling faster decision-making through better information access.
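
As a concrete illustration of the standardized JSON context and citation markers described above, here is a hypothetical example; the field names are illustrative, not the exact training schema.

```python
# Hypothetical shape of the standardized context passed to the model:
# unique ids, source, timestamps where relevant, and the passage text.
context = [
    {"id": 1, "source": "Wikipedia", "timestamp": "2022-03-01",
     "text": "Erster Kontextabschnitt ..."},   # first context passage
    {"id": 2, "source": "Wikipedia", "timestamp": "2019-07-15",
     "text": "Zweiter Kontextabschnitt ..."},  # second context passage
]

# A response can then cite passages by numerical index via a JSON marker:
answer = {"text": "Antworttext ...", "citations": [1]}
```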

### Architecture


| Parameter            | German-RAG-PHI-SFT                                                                                   |
|-----------------------|-----------------------------------------------------------------------------------------------|
| **d_model**          | 3072                                                                                          |
| **num heads**        | 32                                                                                            |
| **num layers**       | 32                                                                                            |
| **MLP ratio**        | 2.66                                                                                          |
| **LayerNorm type**   | RMSNorm                                                                                       |
| **pos embeddings**   | RoPE                                                                                          |
| **attention variant**| Standard Multi-Head Self Attention with sliding-window of 2047                                |
| **biases**           | none                                                                                          |
| **block type**       | sequential                                                                                    |
| **activation**       | SiLU                                                                                          |
| **sequence length**  | 131072                                                                                        |
| **weights dtype**    | bfloat16                                                                                      |
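
These values can be cross-checked against the published model configuration; a quick sanity check using `transformers` (attribute names follow the Phi-3 configuration class):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("avemio/German-RAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI")
print(config.hidden_size)              # d_model, expected 3072
print(config.num_attention_heads)      # expected 32
print(config.num_hidden_layers)        # expected 32
print(config.sliding_window)           # expected 2047
print(config.max_position_embeddings)  # sequence length, expected 131072
print(config.hidden_act)               # expected "silu"
```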

### Hyperparameters 


| Parameter                 | German-RAG-PHI-SFT       |
|---------------------------|--------------------|
| **warmup steps**          | 50                |
| **peak LR**               | 5.0E-07           |
| **weight decay**          | 0.1               |
| **LR schedule**           | linear            |
| **gradient reduce dtype** | FP32              |
| **optimizer state dtype** | FP32              |
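
For orientation, this is roughly how these hyperparameters would map onto `transformers`' `TrainingArguments`; a sketch only, not the exact training setup (`output_dir` and any unlisted settings are placeholders):

```python
from transformers import TrainingArguments

# Illustrative mapping of the card's hyperparameters; NOT the exact
# configuration used to train German-RAG-PHI-SFT.
args = TrainingArguments(
    output_dir="german-rag-phi-sft",  # placeholder
    warmup_steps=50,
    learning_rate=5.0e-7,             # peak LR
    weight_decay=0.1,
    lr_scheduler_type="linear",
    bf16=True,                        # weights in bfloat16; gradient reduction and
                                      # optimizer states kept in FP32 as listed above
)
```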

## Environmental Impact

German-RAG-PHI-SFT was trained on 40 NVIDIA A100 GPUs over 5 days; the approximate power consumption is shown below.

Note that actual power consumption varies with the specific workload and operational conditions; for accurate measurements, dedicated power-monitoring tools are recommended.

| Model          | GPU Type            | Power Consumption From GPUs | 
|----------------|---------------------|-----------------------------|
| German-RAG-PHI-SFT   | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.0144 MWh                    |
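
As a rough guide, GPU energy in MWh can be estimated from GPU count, average draw per GPU, and runtime. A back-of-the-envelope sketch with hypothetical inputs; as noted above, measured figures depend on actual utilization and can differ substantially from such estimates:

```python
def estimate_energy_mwh(num_gpus: int, avg_power_kw: float, hours: float) -> float:
    """Energy estimate: GPUs x average draw per GPU (kW) x runtime (h) / 1000."""
    return num_gpus * avg_power_kw * hours / 1000.0

# Hypothetical inputs: an A100's board power is up to ~0.4 kW at full load,
# but average draw during training is workload-dependent.
print(estimate_energy_mwh(num_gpus=8, avg_power_kw=0.3, hours=24))  # -> 0.0576 MWh
```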

## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

Additionally, as with any LLM, outputs from German-RAG-PHI-SFT may contain factual errors, so generated statements should be verified.



## The German-RAG AI Team
- [Marcel Rosiak](https://de.linkedin.com/in/marcel-rosiak)
- [Soumya Paul](https://de.linkedin.com/in/soumya-paul-1636a68a)
- [Siavash Mollaebrahim](https://de.linkedin.com/in/siavash-mollaebrahim-4084b5153?trk=people-guest_people_search-card)
- [Zain ul Haq](https://de.linkedin.com/in/zain-ul-haq-31ba35196)