---
base_model:
- nbeerbower/Lyra4-Gutenberg2-12B
- Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
- nbeerbower/Mistral-Nemo-Prism-12B
- nbeerbower/mistral-nemo-bophades-12B
- NeverSleep/Lumimaid-v0.2-12B
- Nitral-AI/Wayfarer_Eris_Noctis-12B
- Undi95/LocalC-12B-e2.0
- PygmalionAI/Pygmalion-3-12B
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
- Delta-Vector/Ohashi-NeMo-12B
- Delta-Vector/Francois-Huali-12B
- anthracite-org/magnum-v4-12b
- nbeerbower/mistral-nemo-gutenberg-12B-v4
- LatitudeGames/Wayfarer-12B
- allura-org/Bigger-Body-12b
- Delta-Vector/Rei-12B
- nbeerbower/mistral-nemo-wissenschaft-12B
- allura-org/MN-12b-RP-Ink
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
- PocketDoc/Dans-DangerousWinds-V1.1.0-12b
- elinas/Chronos-Gold-12B-1.0
- natong19/Mistral-Nemo-Instruct-2407-abliterated
- Fizzarolli/MN-12b-Sunrose
- anthracite-org/magnum-v2.5-12b-kto
library_name: transformers
tags:
- mergekit
- merge
---
# WTF this is

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/67c901ec25bc18e8f6abb8be/Hc4hdlhQ17L6zy2Wtq_Uc.jpeg)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

Basically: how many fucking models can you merge into one and still stay coherent? 24 is the answer (though I might do a bigger one).

## Quants

FP8: https://huggingface.co/SanXM1/Driftwood-12b-FP8/

EXL2: https://huggingface.co/NewEden/Delta-Vector_driftwood-exl2
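For quick local testing, here is a minimal Transformers loading sketch. The repo id is a placeholder (point it at wherever you grabbed the full-precision weights), bfloat16 matches the merge dtype in the config below, and ChatML formatting is assumed because the merge uses the ChatML-tokenized Nemo base.

```python
# Minimal sketch, not an official snippet. MODEL_ID is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-namespace/driftwood-12b"  # placeholder: swap in the actual repo id or a local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

# ChatML-style chat, assuming the tokenizer ships a chat template
# (the merge base is the ChatML-tokenized Nemo base).
messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe a stormy harbor town in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```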
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [IntervitensInc/Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml) as a base.

### Models Merged

The following models were included in the merge:
* [nbeerbower/Lyra4-Gutenberg2-12B](https://huggingface.co/nbeerbower/Lyra4-Gutenberg2-12B)
* [Nitral-AI/Captain-Eris_Violet-GRPO-v0.420](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-GRPO-v0.420)
* [nbeerbower/Mistral-Nemo-Prism-12B](https://huggingface.co/nbeerbower/Mistral-Nemo-Prism-12B)
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [NeverSleep/Lumimaid-v0.2-12B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B)
* [Nitral-AI/Wayfarer_Eris_Noctis-12B](https://huggingface.co/Nitral-AI/Wayfarer_Eris_Noctis-12B)
* [Undi95/LocalC-12B-e2.0](https://huggingface.co/Undi95/LocalC-12B-e2.0)
* [PygmalionAI/Pygmalion-3-12B](https://huggingface.co/PygmalionAI/Pygmalion-3-12B)
* [Delta-Vector/Ohashi-NeMo-12B](https://huggingface.co/Delta-Vector/Ohashi-NeMo-12B)
* [Delta-Vector/Francois-Huali-12B](https://huggingface.co/Delta-Vector/Francois-Huali-12B)
* [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b)
* [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [allura-org/Bigger-Body-12b](https://huggingface.co/allura-org/Bigger-Body-12b)
* [Delta-Vector/Rei-12B](https://huggingface.co/Delta-Vector/Rei-12B)
* [nbeerbower/mistral-nemo-wissenschaft-12B](https://huggingface.co/nbeerbower/mistral-nemo-wissenschaft-12B)
* [allura-org/MN-12b-RP-Ink](https://huggingface.co/allura-org/MN-12b-RP-Ink)
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b)
* [PocketDoc/Dans-DangerousWinds-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.0-12b)
* [elinas/Chronos-Gold-12B-1.0](https://huggingface.co/elinas/Chronos-Gold-12B-1.0)
* [natong19/Mistral-Nemo-Instruct-2407-abliterated](https://huggingface.co/natong19/Mistral-Nemo-Instruct-2407-abliterated)
* [Fizzarolli/MN-12b-Sunrose](https://huggingface.co/Fizzarolli/MN-12b-Sunrose)
* [anthracite-org/magnum-v2.5-12b-kto](https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Delta-Vector/Rei-12B
  - model: natong19/Mistral-Nemo-Instruct-2407-abliterated
  - model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
  - model: Nitral-AI/Wayfarer_Eris_Noctis-12B
  - model: LatitudeGames/Wayfarer-12B
  - model: PygmalionAI/Pygmalion-3-12B
  - model: allura-org/Bigger-Body-12b
  - model: allura-org/MN-12b-RP-Ink
  - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
  - model: PocketDoc/Dans-DangerousWinds-V1.1.0-12b
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
  - model: Delta-Vector/Ohashi-NeMo-12B
  - model: Delta-Vector/Francois-Huali-12B
  - model: anthracite-org/magnum-v4-12b
  - model: Undi95/LocalC-12B-e2.0
  - model: NeverSleep/Lumimaid-v0.2-12B
  - model: Fizzarolli/MN-12b-Sunrose
  - model: anthracite-org/magnum-v2.5-12b-kto
  - model: elinas/Chronos-Gold-12B-1.0
  - model: nbeerbower/mistral-nemo-bophades-12B
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - model: nbeerbower/mistral-nemo-wissenschaft-12B
  - model: nbeerbower/Mistral-Nemo-Prism-12B
  - model: nbeerbower/Lyra4-Gutenberg2-12B
merge_method: model_stock
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
normalize: false
int8_mask: true
dtype: bfloat16
```
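If you want to reproduce the merge, mergekit can be driven as a library as well as through its CLI. Below is a rough sketch assuming the YAML above is saved as `config.yaml`; the output directory name is arbitrary, and the exact `MergeOptions` fields can differ between mergekit versions, so treat it as a starting point rather than a verified recipe. All 24 source checkpoints plus the base get downloaded, so budget disk space accordingly.

```python
# Rough reproduction sketch; mergekit's library API may differ between versions.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "config.yaml"        # the YAML shown above, saved to disk
OUTPUT_PATH = "./driftwood-merge"  # arbitrary output directory

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # a GPU helps a lot with 24 source models
        copy_tokenizer=True,             # carry over the base model's ChatML tokenizer
    ),
)
```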