# Model Card for GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI

GRAG (German Retrieval Augmented Generation) models are designed for the German-speaking market, with the goal of enabling innovation and AI solutions that drive German research collaboration in business-focused Generative AI by 2025.

Our Phi-3.5-Mini SFT model is trained on the [GRAG-SFT](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) dataset.

## Model Details

The core models released in this batch are the following:

| Model | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|-------|-----------------|--------|-------------|-----------------|----------------|
| [GRAG-Phi-CPT](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI) | 507.47 million | 32 | 3072 | 32 | 131072 |
| [GRAG-Phi-SFT]() | | | | | |

### Model Description

- **Developed by:** Avemio AI Team
- **Supported by:** Hessian AI
- **Model type:** a Transformer-style autoregressive language model
- **Language(s) (NLP):** German, English
- **License:** The code and model are released under Apache 2.0.
- **Contact:**

### Model Sources

- **Project Page:**
- **Repositories:**
  - Core repo (training, inference, fine-tuning, etc.): Colab examples for CPT, SFT, and ORPO
  - Evaluation code: GitHub repo
  - Further fine-tuning code:
- **Technical blog post:**

## Uses

### Inference

Install the required dependencies (`transformers` and `torch`; `accelerate` is needed for `device_map="auto"`), then proceed as usual with Hugging Face:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# German prompt: "Follow the user's instructions. Before giving your final
# answer, describe your reasoning for solving the problem."
prompt = "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine Überlegungen zur Lösung des Problems."
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

### Fine-tuning

We provide a comprehensive Google Colab notebook that guides users through fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings: [Colab-Notebook](https://colab.research.google.com/drive/1U6aP3vIkABaCm7doGV1waHgTLvXNGbBp?usp=sharing).
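The notebook above is the authoritative guide. For orientation only, here is a minimal sketch of a supervised fine-tuning run using Hugging Face TRL's `SFTTrainer` (an assumed tooling choice, not necessarily the setup used to train this model), reusing the SFT hyperparameters listed further down in this card; dataset split and column names, batch settings, and epoch count are illustrative and may need adjustment.

```python
# Minimal SFT sketch with TRL's SFTTrainer (assumed tooling; see the Colab
# notebook above for the official fine-tuning workflow).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# ShareGPT-style SFT data used for this model; the split name and the
# conversation column format may need adaptation to what SFTTrainer expects.
dataset = load_dataset("avemio/GRAG-SFT-ShareGPT-HESSIAN-AI", split="train")

config = SFTConfig(
    output_dir="grag-phi-3.5-mini-sft",
    per_device_train_batch_size=1,      # illustrative value
    gradient_accumulation_steps=8,      # illustrative value
    num_train_epochs=1,                 # illustrative value
    learning_rate=5e-7,                 # peak LR from the hyperparameter table
    warmup_steps=50,                    # from the hyperparameter table
    weight_decay=0.1,                   # from the hyperparameter table
    lr_scheduler_type="linear",         # from the hyperparameter table
    bf16=True,
    logging_steps=10,
)

trainer = SFTTrainer(
    # Starting from the CPT checkpoint is an assumption based on the model
    # series listed above; swap in another base model as needed.
    model="avemio/GRAG-PHI-3.5-MINI-4B-CPT-HESSIAN-AI",
    train_dataset=dataset,
    args=config,
)
trainer.train()
```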
## Evaluation

The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time-difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context. Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.

- **Language quality:** This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
- **Overall correctness:** The accuracy and correctness of the content were evaluated under this metric.
- **Instruction following:** This metric assessed the model's ability to follow the specific instructions provided for each task.
- **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.

| Metric | [Vanilla-Phi-3.5-Mini-4B](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | [GRAG-Phi3.5-SFT-Mini-4B](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-SFT-HESSIAN-AI) | [GRAG-ORPO-Phi3-5-Mini-4B](https://huggingface.co/avemio/GRAG-PHI-3.5-MINI-4B-ORPO-HESSIAN-AI) | [GRAG-Merge-Phi3.5-Mini-4B]() |
|------------------------------------------|-------|-------|---|---|
| **Average_language_quality**              | 80.33 | 86.45 | | |
| **extraction_recall_overall_score**       | 64.43 | 65.68 | | |
| **qa_multiple_references_overall_score**  | 59.82 | 63.12 | | |

## Model Details

### Data

For training data details, please see the [GRAG-SFT-Dataset](https://huggingface.co/datasets/avemio/GRAG-SFT-ShareGPT-HESSIAN-AI) documentation.

### Architecture

| Parameter | GRAG-PHI-SFT |
|------------------------|-------------------------------------------------------------------|
| **d_model**            | 3072 |
| **num heads**          | 32 |
| **num layers**         | 32 |
| **MLP ratio**          | 2.66 |
| **LayerNorm type**     | RMSNorm |
| **pos embeddings**     | RoPE |
| **attention variant**  | Standard multi-head self-attention with a sliding window of 2047 |
| **biases**             | none |
| **block type**         | sequential |
| **activation**         | SiLU |
| **sequence length**    | 131072 |
| **weight tying**       | |
| **precision**          | bfloat16 |

### Hyperparameters

| | **GRAG-PHI-SFT** |
|-----------------------|------------------|
| warmup steps          | 50 |
| peak LR               | 5.0E-07 |
| weight decay          | 0.1 |
| LR schedule           | linear |
| gradient reduce dtype | FP32 |
| optimizer state dtype | FP32 |

## Environmental Impact

GRAG-PHI-SFT, trained on 8 NVIDIA A100 GPUs for 5 days, has an approximate power consumption as shown below. Note that actual power consumption may vary depending on the specific workload and operational conditions; for accurate measurements, dedicated power monitoring tools are recommended.

| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|--------------|-----------------------------------------------------------|-----------|---|---|
| GRAG-PHI-SFT | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh | | |
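As a rough sanity check, the 0.288 MWh figure above is consistent with 8 A100 GPUs running for 5 days at an average draw of roughly 300 W per GPU (the per-GPU draw is our assumption, not a value stated in this card):

```python
# Approximate GPU energy for the reported SFT run.
# Assumption (not stated in the card): ~300 W (0.3 kW) average draw per GPU.
num_gpus = 8
run_hours = 5 * 24                 # 5 days of training
avg_gpu_power_kw = 0.3             # assumed average per-GPU draw

energy_kwh = num_gpus * run_hours * avg_gpu_power_kw
print(energy_kwh / 1000, "MWh")    # -> 0.288 MWh, matching the table above
```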
## Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful or otherwise sensitive content. Such content can also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks of applying this technology. Furthermore, many statements produced by GRAG-Phi-SFT, as by any LLM, may be factually incorrect and should be verified.

## Model Card Contact

For errors in this model card, please contact the Avemio AI Team.