---
base_model:
- unsloth/Llama-3.2-1B-Instruct-bnb-4bit
- jayavibhav/llama3.2_1b_CoT
- Alelcv27/llama3.2-1b-math-code
- huyhoangt2201/llama-3.2-1b-chat-sql3-merged
- meta-llama/Llama-3.2-1B-Instruct
- autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
- ank028/Llama-3.2-1B-Instruct-gsm8k
- qzhang-2024/Llama-3.2-1B-pre-trained
- ank028/Llama-3.2-1B-Instruct-medmcqa
- huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged
- student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09
- meta-llama/Llama-3.2-1B
- autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear
- MLking2/llama-3.2-1b-medical
- ank028/Llama-3.2-1B-Instruct-commonsense_qa
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged with the [TIES](https://arxiv.org/abs/2306.01708) merge method, using [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) as the base.
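
TIES (TrIm, Elect Sign & Merge) reduces interference between fine-tunes in three steps: each model's task vector (its delta from the base) is trimmed to the highest-magnitude fraction given by `density`, a per-parameter sign is elected by majority across the trimmed deltas, and only the deltas agreeing with the elected sign are averaged back onto the base, scaled by `weight`. Below is a minimal single-tensor sketch of the idea in PyTorch; it is illustrative only, not mergekit's implementation, and the function and variable names are assumptions:

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 0.5, weight: float = 1.0) -> torch.Tensor:
    """Merge one parameter tensor from several fine-tunes onto a base (TIES sketch)."""
    # 1. Trim: keep only the top-`density` fraction of each task vector by magnitude.
    deltas = []
    for ft in finetuned:
        delta = ft - base
        k = max(1, int(density * delta.numel()))
        # k-th largest magnitude = (numel - k + 1)-th smallest.
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        deltas.append(torch.where(delta.abs() >= threshold, delta,
                                  torch.zeros_like(delta)))
    stacked = torch.stack(deltas)

    # 2. Elect sign: per-parameter majority sign of the summed trimmed deltas.
    elected = torch.sign(stacked.sum(dim=0))

    # 3. Disjoint merge: average only the deltas that agree with the elected sign.
    agree = torch.sign(stacked) == elected
    count = agree.sum(dim=0).clamp(min=1)
    merged = (stacked * agree).sum(dim=0) / count

    return base + weight * merged
```

The `parameters` block in the configuration below corresponds to these two knobs: `density: 0.5` trims each task vector to its largest half, and `weight: 1.0` adds the merged delta back onto the base at full strength.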

### Models Merged

The following models were included in the merge:
* [unsloth/Llama-3.2-1B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit)
* [jayavibhav/llama3.2_1b_CoT](https://huggingface.co/jayavibhav/llama3.2_1b_CoT)
* [Alelcv27/llama3.2-1b-math-code](https://huggingface.co/Alelcv27/llama3.2-1b-math-code)
* [huyhoangt2201/llama-3.2-1b-chat-sql3-merged](https://huggingface.co/huyhoangt2201/llama-3.2-1b-chat-sql3-merged)
* [autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1](https://huggingface.co/autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1)
* [ank028/Llama-3.2-1B-Instruct-gsm8k](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-gsm8k)
* [qzhang-2024/Llama-3.2-1B-pre-trained](https://huggingface.co/qzhang-2024/Llama-3.2-1B-pre-trained)
* [ank028/Llama-3.2-1B-Instruct-medmcqa](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-medmcqa)
* [huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged](https://huggingface.co/huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged)
* [student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09](https://huggingface.co/student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09)
* [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B)
* [autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear](https://huggingface.co/autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear)
* [MLking2/llama-3.2-1b-medical](https://huggingface.co/MLking2/llama-3.2-1b-medical)
* [ank028/Llama-3.2-1B-Instruct-commonsense_qa](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-commonsense_qa)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: ties
architectures: ["transformer"]
base_model: meta-llama/Llama-3.2-1B-Instruct
models:
  - model: Alelcv27/llama3.2-1b-math-code
  - model: huyhoangt2201/llama-3.2-1b-sql_finetuned_billingual_3.0_merged
  - model: autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1
  - model: meta-llama/Llama-3.2-1B-Instruct
  - model: autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-linear
  - model: meta-llama/Llama-3.2-1B
  - model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
  - model: MLking2/llama-3.2-1b-medical
  - model: jayavibhav/llama3.2_1b_CoT
  - model: huyhoangt2201/llama-3.2-1b-chat-sql3-merged
  - model: student-abdullah/Llama3.2-1B_Hinglish-Medicine-Dataset_Finetuning_28-09
  - model: qzhang-2024/Llama-3.2-1B-pre-trained
  - model: ank028/Llama-3.2-1B-Instruct-medmcqa
  - model: ank028/Llama-3.2-1B-Instruct-gsm8k
  - model: ank028/Llama-3.2-1B-Instruct-commonsense_qa

parameters:
  density: 0.5
  weight: 1.0
```
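
With mergekit installed, a configuration like this is typically applied with its `mergekit-yaml` command-line tool (e.g. `mergekit-yaml config.yaml ./merged`), which writes the merged weights and tokenizer to the output directory. The result loads like any other Llama 3.2 checkpoint; here is a minimal sketch with transformers, where `./merged` is a placeholder for this repository's actual path or Hub id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path (or Hub id) of the merged model; "./merged" is a placeholder.
model_id = "./merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Llama 3.2 Instruct checkpoints ship a chat template, so apply it for prompting.
messages = [{"role": "user", "content": "Write a SQL query that counts rows in a table."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```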