avemio-digital committed on
Commit bea7c47 (verified)
1 Parent(s): 022cd43

Create README.md

Files changed (1):
  1. README.md +171 -0

README.md ADDED
# Model Card for GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI

<!-- Provide a quick summary of what the model is/does. -->

GRAG (German Retrieval Augmented Generation) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.
Our Phi-3.5-Mini SFT model is trained on the [GRAG-SFT](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) dataset.

## Model Details

The core models released in this batch are the following:

| Model | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|-------|-----------------|--------|-------------|-----------------|----------------|
| [GRAG-Phi-CPT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI) | 507.47 million | 32 | 3072 | 32 | 131072 |
| [GRAG-Phi-SFT]() | | | | | |

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Avemio AI Team
- **Supported by:** Hessian AI
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** German, English
- **License:** The code and model are released under Apache 2.0.
- **Contact:**

### Model Sources

<!-- Provide the basic links for the model. -->

- **Project Page:**
- **Repositories:**
  - Core repo (training, inference, fine-tuning, etc.): Colab examples for CPT, SFT, and ORPO
  - Evaluation code: GitHub repo
  - Further fine-tuning code:
- **Technical blog post:**
<!-- - **Press release:** TODO -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Inference
Quickly get inference running with the example below; the `transformers`, `torch`, and `accelerate` packages are required.
Then proceed as usual with Hugging Face:
+ ```python
53
+ from transformers import AutoModelForCausalLM, AutoTokenizer
54
+
55
+ model_name = "avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI"
56
+
57
+ model = AutoModelForCausalLM.from_pretrained(
58
+ model_name,
59
+ torch_dtype="auto",
60
+ device_map="auto"
61
+ )
62
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
63
+
64
+ prompt = "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine 脺berlegungen zur L枚sung des Problems."
65
+ messages = [
66
+ {"role": "system", "content": ""},
67
+ {"role": "user", "content": prompt}
68
+ ]
69
+ text = tokenizer.apply_chat_template(
70
+ messages,
71
+ tokenize=False,
72
+ add_generation_prompt=True
73
+ )
74
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
75
+
76
+ generated_ids = model.generate(
77
+ **model_inputs,
78
+ max_new_tokens=512
79
+ )
80
+ generated_ids = [
81
+ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
82
+ ]
83
+
84
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
85
+
86
+ ```
87
+
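Since GRAG models target retrieval-augmented generation, a common pattern is to place retrieved passages directly in the prompt. The sketch below reuses the `model` and `tokenizer` loaded above; the prompt layout, the placeholder passage, and the variable names are illustrative assumptions, not an official prompt format.

```python
# Reuses `model` and `tokenizer` from the example above.
# The retrieved passage and the prompt layout are illustrative placeholders.
retrieved_context = "Hessian AI ist ein Zentrum für Künstliche Intelligenz mit Sitz in Darmstadt."
question = "Wo hat Hessian AI seinen Sitz?"

messages = [
    {"role": "system", "content": "Beantworte die Frage ausschließlich anhand des folgenden Kontexts:\n" + retrieved_context},
    {"role": "user", "content": question},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
answer = tokenizer.batch_decode(output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]
print(answer)
```
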
### Fine-tuning
We provide a comprehensive Google Colab notebook that guides you through fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings:
[Colab-Notebook](https://colab.research.google.com/drive/1U6aP3vIkABaCm7doGV1waHgTLvXNGbBp?usp=sharing).

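For users who prefer a local, parameter-efficient setup instead of the Colab notebook, the sketch below shows one possible starting point using PEFT/LoRA. It is only a sketch under stated assumptions: the LoRA hyperparameters and target module names are illustrative, not the configuration used to train this model, and should be verified against the checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative LoRA settings; not the configuration used for the released model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # assumed Phi-3.5 projection names; check model.named_modules()
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```
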
## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->
The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time-difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context.

Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.

- **Language quality:** This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
- **Overall correctness:** The accuracy and correctness of the content were evaluated under this metric.
- **Instruction following:** This metric assessed the model's ability to follow the specific instructions provided for each task.
- **Overall score:** This metric combined the results of the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets (an illustrative aggregation sketch follows this list).

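As an illustration only, an overall score could be computed as an unweighted mean of the three per-metric scores; the exact aggregation and weighting used for the results below may differ.

```python
# Purely illustrative aggregation; the actual weighting used in the evaluation may differ.
def overall_score(language_quality: float, correctness: float, instruction_following: float) -> float:
    return (language_quality + correctness + instruction_following) / 3

print(overall_score(86.0, 64.0, 70.0))  # hypothetical per-metric scores
```
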
| Metric | [Vanilla-Phi-3.5-Mini-4B](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | [GRAG-Phi3.5-SFT-Mini-4B](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI) | [GRAG-ORPO-Phi3.5-Mini-4B](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) | [GRAG-Merge-Phi3.5-Mini-4B]() |
|--------|------|------|------|------|
| **Average language quality** | 80.33 | 86.45 | | |
| Extraction recall (overall score) | 64.43 | 65.68 | | |
| QA with multiple references (overall score) | 59.82 | 63.12 | | |

## Model Details

### Data
For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.

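The dataset can be loaded directly with the `datasets` library; a minimal sketch (the `"train"` split name is an assumption and should be checked against the dataset card):

```python
from datasets import load_dataset

# Loads the GRAG SFT data in ShareGPT format; the "train" split name is an assumption.
dataset = load_dataset("avemio/GRAG-SFT-ShareGPT-HESSIAN-AI", split="train")
print(dataset[0])
```
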
### Architecture

| | **GRAG-PHI-SFT** |
|------------------------|-------------------|
| d_model | 3072 |
| num heads | 32 |
| num layers | 32 |
| MLP ratio | 2.66 |
| LayerNorm type | RMSNorm |
| pos embeddings | RoPE |
| attention variant | Standard multi-head self-attention with a sliding window of 2047 tokens |
| biases | none |
| block type | sequential |
| activation | SiLU |
| sequence length | 131072 |
| weights dtype | bfloat16 |

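These values can be cross-checked against the configuration shipped with the checkpoint. The short snippet below assumes the attribute names of the Hugging Face Phi-3 configuration class:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI")
# Attribute names follow the Hugging Face Phi-3 configuration class.
print(cfg.hidden_size)              # d_model (3072)
print(cfg.num_hidden_layers)        # num layers (32)
print(cfg.num_attention_heads)      # num heads (32)
print(cfg.sliding_window)           # attention sliding window
print(cfg.max_position_embeddings)  # sequence length (131072)
```
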
### Hyperparameters

| | **GRAG-PHI-SFT** |
|-----------------------|------------------|
| warmup steps | 50 |
| peak LR | 5.0E-07 |
| weight decay | 0.1 |
| LR schedule | linear |
| gradient reduce dtype | FP32 |
| optimizer state dtype | FP32 |

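For reference, these hyperparameters map onto `transformers.TrainingArguments` roughly as in the sketch below. This is not the exact training configuration: the output directory, batch size, epochs, and precision flag are placeholders.

```python
from transformers import TrainingArguments

# Sketch only: output_dir, batch size, epochs, and bf16 are placeholders, not training values.
training_args = TrainingArguments(
    output_dir="./grag-phi-sft",
    warmup_steps=50,             # warmup steps
    learning_rate=5.0e-7,        # peak LR
    weight_decay=0.1,            # weight decay
    lr_scheduler_type="linear",  # LR schedule
    bf16=True,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)
```
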
## Environmental Impact

GRAG-PHI-SFT was trained on 8 NVIDIA A100 GPUs for 5 days; its approximate power consumption is listed below.

Note that actual power consumption may vary depending on the specific workload and operational conditions. For accurate measurements, dedicated power-monitoring tools are recommended.

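The 0.288 MWh figure in the table below is consistent with a back-of-the-envelope estimate that assumes an average draw of roughly 300 W per GPU (an assumption, not a measured value):

```python
# Assumption: ~300 W (0.3 kW) average draw per A100; 8 GPUs running for 5 days.
gpus, avg_kw_per_gpu, hours = 8, 0.3, 5 * 24
energy_mwh = gpus * avg_kw_per_gpu * hours / 1000
print(f"{energy_mwh:.3f} MWh")  # 0.288 MWh, matching the table below
```
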
| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|-----------|------------|-----------------------------|--------------------------------|---------------------------|
| GRAG-PHI-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh | | |

## Bias, Risks, and Limitations

Like any base or fine-tuned language model without safety filtering, these models can easily be prompted by users to generate harmful or otherwise sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend that users consider the risks of applications of this technology.

Additionally, many statements produced by GRAG-Phi-SFT, as by any LLM, may be factually incorrect, so outputs should be verified.

## Model Card Contact

For errors in this model card, please contact the Avemio AI Team.