Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

MFANNv0.14.10 - bnb 4bits

- Model creator: https://huggingface.co/netcat420/
- Original model: https://huggingface.co/netcat420/MFANNv0.14.10/

Original model description:

---
base_model:
- netcat420/MFANNv0.14
- MaziyarPanahi/Llama-3-8B-Instruct-v0.4
- netcat420/MFANNv0.13
library_name: transformers
tags:
- mergekit
- merge
---

# MFANNv0.14.10

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [MaziyarPanahi/Llama-3-8B-Instruct-v0.4](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.4) as the base.

### Models Merged

The following models were included in the merge:

* [netcat420/MFANNv0.14](https://huggingface.co/netcat420/MFANNv0.14)
* [netcat420/MFANNv0.13](https://huggingface.co/netcat420/MFANNv0.13)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: netcat420/MFANNv0.14
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: netcat420/MFANNv0.13
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
merge_method: ties
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.4
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
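
## Usage

The configuration above can be replayed through mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged-model`) to reproduce the merge before quantizing. Note that mergekit interprets the list-valued `density: [1, 0.7, 0.1]` as a gradient interpolated across the model's layers, so early layers retain most task-vector parameters while later layers are pruned more aggressively.

Below is a minimal sketch of loading this 4-bit bitsandbytes checkpoint with `transformers`. The repository id is an assumption based on this card, so substitute the actual path of this upload. Because the checkpoint is saved pre-quantized, `from_pretrained` should pick up the stored bitsandbytes quantization config automatically, provided the `bitsandbytes` and `accelerate` packages are installed.

```python
# Minimal sketch, not an official example. Assumes bitsandbytes and
# accelerate are installed and a CUDA-capable GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this 4-bit upload; replace with the actual path.
model_id = "RichardErkhov/MFANNv0.14.10-bnb-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The bnb 4-bit quantization config is stored in the checkpoint itself, so
# no explicit BitsAndBytesConfig is needed for a pre-quantized model.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",
)

prompt = "Explain the TIES merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```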