---
base_model:
- Qwen/Qwen2.5-7B-Instruct
- Qwen/Qwen2.5-Coder-7B
- Qwen/Qwen2.5-Math-7B
- Qwen/Qwen2.5-7B
library_name: transformers
tags:
- mergekit
- merge
---
# nthehai01/Qwen2.5-7B-Instruct-Math-Code-dare-linear
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
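A minimal way to try the merged model with 🤗 Transformers (a sketch; the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nthehai01/Qwen2.5-7B-Instruct-Math-Code-dare-linear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Question: What is 17 * 24? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```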
## Performance
| Metric |Value|
|---------------------------------|----:|
|GSM8k (zero-shot) |87.79|
|HellaSwag (zero-shot)            |34.29|
|MBPP (zero-shot) |60.41|
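The card does not state which evaluation harness produced these numbers. One common way to obtain comparable zero-shot scores is EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the sketch below is an assumption about the setup, not the original evaluation pipeline, and MBPP is often run through a separate code-evaluation harness:
```python
import lm_eval

# Hypothetical re-evaluation; exact scores may differ from the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nthehai01/Qwen2.5-7B-Instruct-Math-Code-dare-linear,dtype=bfloat16",
    tasks=["gsm8k", "hellaswag"],
    num_fewshot=0,
)
print(results["results"])
```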
## Merge Details
### Merge Method
This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base model.
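As intuition for what `dare_linear` does: for each source model, the parameter delta from the base is computed, a fraction `1 - density` of its entries is randomly dropped, the survivors are rescaled by `1 / density`, and the weighted deltas are added back onto the base, with `normalize` rescaling the weights to sum to 1 and `lambda` scaling the combined delta. Below is a minimal single-tensor sketch under that reading of the parameters; the function and its interface are illustrative, not mergekit's API:
```python
import torch

def dare_linear_tensor(base, tuned, weights, densities, lam=1.0, normalize=True):
    """Illustrative DARE-linear merge of a single parameter tensor.

    base:      the tensor from the base model
    tuned:     tensors from the fine-tuned models
    weights:   per-model mixing weights (cf. `weight` in the config)
    densities: per-model keep probabilities (cf. `density` in the config)
    lam:       global scale on the merged delta (cf. `lambda` in the config)
    """
    if normalize:  # cf. `normalize: 1.0` in the config: rescale weights to sum to 1
        total = sum(weights)
        weights = [w / total for w in weights]
    merged_delta = torch.zeros_like(base)
    for t, w, d in zip(tuned, weights, densities):
        delta = t - base                                   # task vector
        keep = torch.bernoulli(torch.full_like(delta, d))  # drop with prob 1 - d
        merged_delta += w * keep * delta / d               # rescale survivors
    return base + lam * merged_delta
```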
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
* [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
* [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen2.5-7B
dtype: bfloat16
merge_method: dare_linear
parameters:
  lambda: 0.690661354021995
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Math-7B
    parameters:
      density: 0.9593725853706829
      weight: 0.11472446469404357
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-Coder-7B
    parameters:
      density: 0.768281938201547
      weight: 0.11350094855547865
  - layer_range: [0, 28]
    model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 0.48528478746069637
      weight: 0.6453505470133651
```
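To reproduce the merge, one option is to save the configuration above as `config.yaml` and run it through mergekit's Python entry point (or the `mergekit-yaml` CLI); a sketch assuming a recent mergekit release, with a placeholder output path:
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./merged-model",  # placeholder output directory
    options=MergeOptions(copy_tokenizer=True),
)
```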