---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
- mjm4dl/model_XY_llama3_Meta-Llama-3-8B-Instruct_1_128_4
- mjm4dl/merge_model_slot_filling_intent_cl
library_name: transformers
tags:
- mergekit
- merge
---
# merge_model_v1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as the base model.
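TIES proceeds in three steps: it *trims* each fine-tuned model's task vector (its delta from the base) to the top fraction given by `density`, *elects* a per-parameter majority sign across the trimmed deltas, and *merges* only the deltas that agree with that sign. The sketch below illustrates this per weight tensor in PyTorch; it is a minimal, simplified rendering of the method, not mergekit's actual implementation, which differs in trimming and normalization details.
```python
import torch

def ties_merge(base: torch.Tensor,
               tuned: list[torch.Tensor],
               densities: list[float],
               weights: list[float]) -> torch.Tensor:
    # Trim: for each model, keep only the top-`density` fraction of its
    # task vector (delta from the base) by magnitude, zeroing the rest.
    deltas = []
    for t, density in zip(tuned, densities):
        delta = t - base
        k = max(int(delta.numel() * density), 1)
        threshold = delta.abs().flatten().topk(k).values.min()
        deltas.append(torch.where(delta.abs() >= threshold, delta,
                                  torch.zeros_like(delta)))
    stacked = torch.stack(deltas)
    w = torch.tensor(weights).view(-1, *([1] * base.dim()))

    # Elect sign: majority sign of the weighted deltas, per parameter.
    sign = torch.sign((stacked * w).sum(dim=0))
    agree = torch.sign(stacked) == sign

    # Disjoint merge: weighted mean of the agreeing deltas only.
    # `normalize: true` corresponds to dividing by the total weight
    # that actually contributed at each parameter.
    num = (stacked * w * agree).sum(dim=0)
    den = (w * agree).sum(dim=0).clamp(min=1e-8)
    return base + num / den
```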
### Models Merged
The following models were included in the merge:
* [mjm4dl/model_XY_llama3_Meta-Llama-3-8B-Instruct_1_128_4](https://huggingface.co/mjm4dl/model_XY_llama3_Meta-Llama-3-8B-Instruct_1_128_4)
* [mjm4dl/merge_model_slot_filling_intent_cl](https://huggingface.co/mjm4dl/merge_model_slot_filling_intent_cl)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mjm4dl/merge_model_slot_filling_intent_cl
    parameters:
      density: 0.4
      weight: 0.5
  - model: mjm4dl/model_XY_llama3_Meta-Llama-3-8B-Instruct_1_128_4
    parameters:
      density: 0.4
      weight: 0.6
merge_method: ties
base_model: meta-llama/Llama-3.1-8B-Instruct
parameters:
  normalize: true
dtype: float16
```
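In this configuration, `density` is the fraction of each task vector retained after trimming, `weight` scales each model's contribution, and `normalize: true` rescales the summed deltas by the total weight. To reproduce the merge, save the configuration as `config.yaml` and run `mergekit-yaml config.yaml ./merged-model`. The result loads like any other `transformers` causal LM; a minimal sketch, assuming this repository's id (`mjm4dl/merge_model_v1`) and the `float16` dtype from the config:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this repository's name; point at a local merge
# output directory instead if you ran mergekit yourself.
model_id = "mjm4dl/merge_model_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Example query in the intent/slot-filling style the merged models target.
messages = [{"role": "user", "content": "Book a table for two at 7pm tomorrow."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```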