Paper: [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with kromcomp/L3-Direv2-8B as the base.
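For intuition: Model Stock (arXiv:2403.19522) averages the fine-tuned weights, then interpolates that average back toward the base model, with a ratio derived from the angle between the fine-tuned models' task vectors. The sketch below is a minimal per-tensor illustration of that formula, not mergekit's actual code; with `filter_wise: 1.0` in the configuration below, mergekit instead computes the ratio per row rather than per whole tensor.

```python
# Minimal sketch of the Model Stock interpolation (Jang et al., 2024,
# arXiv:2403.19522). Function and variable names are illustrative;
# this is not mergekit's implementation.
import torch


def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor from k >= 2 fine-tuned models toward the base."""
    k = len(finetuned)
    assert k >= 2, "the interpolation ratio needs at least two task vectors"

    # Task vectors: displacement of each fine-tuned weight from the base.
    deltas = [w - base for w in finetuned]

    # Average pairwise cosine similarity between task vectors (cos theta).
    cos_sims = []
    for i in range(k):
        for j in range(i + 1, k):
            cos_sims.append(torch.nn.functional.cosine_similarity(
                deltas[i].flatten(), deltas[j].flatten(), dim=0))
    cos_theta = torch.stack(cos_sims).mean()

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    # Merged weight: move from the base toward the fine-tuned average by t.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```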
The following models were included in the merge:

* OEvortex/Emotional-llama-8B + PJMixers-Archive/ResplendentAI_Theory_of_Mind_Llama3-QLoRA
* Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
The following YAML configuration was used to produce this model:
```yaml
base_model: kromcomp/L3-Direv2-8B
chat_template: llama3
dtype: float32
merge_method: model_stock
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 32]
            model: OEvortex/Emotional-llama-8B+PJMixers-Archive/ResplendentAI_Theory_of_Mind_Llama3-QLoRA
          - layer_range: [0, 32]
            model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
          - layer_range: [0, 32]
            model: kromcomp/L3-Direv2-8B
parameters:
  filter_wise: 1.0
tokenizer:
  pad_to_multiple_of: 32
```
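To reproduce a merge like this one, save the configuration to a file and pass it to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model` (the file and output paths here are illustrative).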