---
base_model:
- Lunzima/NQLSG-Qwen2.5-14B-OriginalFusion
- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v5
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method, with [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8) serving as the base model.

### Models Merged

The following models were included in the merge:

* [Lunzima/NQLSG-Qwen2.5-14B-OriginalFusion](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-OriginalFusion)
* [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8)
* [Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v5](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v5)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
merge_method: slerp
tokenizer_source: base
dtype: bfloat16
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
        layer_range: [0, 24]
      - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v5
        layer_range: [0, 24]
  - sources:
      - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
        layer_range: [24, 48]
      - model: Lunzima/NQLSG-Qwen2.5-14B-OriginalFusion
        layer_range: [24, 48]
```
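For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, with the `t` values above controlling how far the result moves from the base model (t = 0) toward the other model (t = 1) for the `self_attn` and `mlp` sub-layers at different depths. The sketch below (using NumPy) illustrates the underlying formula for a single tensor; it is a simplified illustration, not mergekit's actual implementation.

```python
# Minimal per-tensor sketch of spherical linear interpolation (SLERP).
# Illustrative only; not mergekit's actual code.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate from v0 (t=0) to v1 (t=1) along the arc between them."""
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    # Angle between the two flattened weight tensors.
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if np.sin(theta) < eps:
        return ((1.0 - t) * a + t * b).reshape(v0.shape)
    # Standard SLERP weights: the result stays on the arc between a and b.
    w0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    w1 = np.sin(t * theta) / np.sin(theta)
    return (w0 * a + w1 * b).reshape(v0.shape)

# Example: t=0.3 keeps the merged tensor closer to the base tensor v0.
merged = slerp(0.3, np.random.randn(4, 4), np.random.randn(4, 4))
```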
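Assuming a working [mergekit](https://github.com/cg123/mergekit) installation, a configuration like this can generally be re-applied with the `mergekit-yaml` command line tool, e.g. `mergekit-yaml config.yaml ./output-model` (both paths here are placeholders).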