---
base_model:
- mergekit-community/mergekit-model_stock-bzcrthr
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- mergekit-community/mergekit-model_stock-bzcrthr
- DreadPoor/Everything-COT-8B-r128-LoRA
- mergekit-community/mergekit-model_stock-bzcrthr
- surya-narayanan/clinical_knowledge
- mergekit-community/mergekit-model_stock-bzcrthr
- kik41/lora-type-narrative-llama-3-8b-v2
- mergekit-community/mergekit-model_stock-bzcrthr
- Azazelle/Llama3-RP-Lora
- mergekit-community/mergekit-model_stock-bzcrthr
- surya-narayanan/sociology
- mergekit-community/mergekit-model_stock-bzcrthr
- dimasik1987/74f5bf43-4a1b-44bb-9b95-6b5631ccfc3e
- mergekit-community/mergekit-model_stock-bzcrthr
- DreadPoor/OpenBioLLM-8B-r64-LoRA
- mergekit-community/mergekit-model_stock-bzcrthr
- surya-narayanan/astronomy
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B) as the base.
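For intuition: Model Stock interpolates each weight tensor between the centroid of the fine-tuned models and the base model, with a ratio derived from the angle between the models' task vectors (fine-tuned weights minus base weights). The toy NumPy sketch below illustrates the per-tensor formula from the paper; it is a simplification, not mergekit's actual implementation.

```python
import numpy as np

def model_stock_merge(base, tuned):
    """Toy per-tensor Model Stock merge (Jang et al., 2024), simplified.

    base:  pretrained weight tensor (flattened 1-D array)
    tuned: list of k fine-tuned weight tensors of the same shape
    """
    k = len(tuned)
    deltas = [w - base for w in tuned]  # task vectors
    # Estimate cos(theta) as the mean pairwise cosine similarity
    # between task vectors.
    cos = []
    for i in range(k):
        for j in range(i + 1, k):
            cos.append(
                np.dot(deltas[i], deltas[j])
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cos)) if cos else 1.0
    # Interpolation ratio toward the centroid of the fine-tuned weights.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = np.mean(tuned, axis=0)
    return t * w_avg + (1 - t) * base
```

Note that when the task vectors are nearly orthogonal (cos θ ≈ 0), the ratio t collapses toward 0 and the merge stays close to the base weights; when they agree (cos θ ≈ 1), t approaches 1 and the merge approaches the plain average of the fine-tuned models.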
### Models Merged

The following models were included in the merge:

* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [DreadPoor/Everything-COT-8B-r128-LoRA](https://huggingface.co/DreadPoor/Everything-COT-8B-r128-LoRA)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [surya-narayanan/clinical_knowledge](https://huggingface.co/surya-narayanan/clinical_knowledge)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [kik41/lora-type-narrative-llama-3-8b-v2](https://huggingface.co/kik41/lora-type-narrative-llama-3-8b-v2)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [Azazelle/Llama3-RP-Lora](https://huggingface.co/Azazelle/Llama3-RP-Lora)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [surya-narayanan/sociology](https://huggingface.co/surya-narayanan/sociology)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [dimasik1987/74f5bf43-4a1b-44bb-9b95-6b5631ccfc3e](https://huggingface.co/dimasik1987/74f5bf43-4a1b-44bb-9b95-6b5631ccfc3e)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [DreadPoor/OpenBioLLM-8B-r64-LoRA](https://huggingface.co/DreadPoor/OpenBioLLM-8B-r64-LoRA)
* [mergekit-community/mergekit-model_stock-bzcrthr](https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr) + [surya-narayanan/astronomy](https://huggingface.co/surya-narayanan/astronomy)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mergekit-community/mergekit-model_stock-bzcrthr+kik41/lora-type-narrative-llama-3-8b-v2
  - model: mergekit-community/mergekit-model_stock-bzcrthr+surya-narayanan/sociology
  - model: mergekit-community/mergekit-model_stock-bzcrthr+surya-narayanan/clinical_knowledge
  - model: mergekit-community/mergekit-model_stock-bzcrthr+surya-narayanan/astronomy
  - model: mergekit-community/mergekit-model_stock-bzcrthr+DreadPoor/Everything-COT-8B-r128-LoRA
  - model: mergekit-community/mergekit-model_stock-bzcrthr+dimasik1987/74f5bf43-4a1b-44bb-9b95-6b5631ccfc3e
  - model: mergekit-community/mergekit-model_stock-bzcrthr+DreadPoor/OpenBioLLM-8B-r64-LoRA
  - model: mergekit-community/mergekit-model_stock-bzcrthr+Azazelle/Llama3-RP-Lora
merge_method: model_stock
base_model: mergekit-community/mergekit-model_stock-bzcrthr+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
dtype: bfloat16
```