---
base_model:
- brayene/DPO-Qwen2-1.5B-Instruct-Human-like
- prithivMLmods/QwQ-R1-Distill-1.5B-CoT
- Qwen/Qwen2.5-Math-1.5B
- MadeAgents/Hammer2.1-1.5b
library_name: transformers
tags:
- mergekit
- merge

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [prithivMLmods/QwQ-R1-Distill-1.5B-CoT](https://huggingface.co/prithivMLmods/QwQ-R1-Distill-1.5B-CoT) as the base model.
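For intuition, Model Stock interpolates between the base weights and the average of the fine-tuned weights, with a ratio derived from the angle between the models' task vectors (their deltas from the base). Below is a minimal, illustrative sketch of that rule for a single flattened layer, using plain Python lists; the function name and the list-based representation are assumptions for demonstration, not mergekit's actual implementation:

```python
import math

def model_stock_layer(base, finetuned):
    """Sketch of the Model Stock merge rule for one layer.

    base:      flattened pretrained weights (list of floats)
    finetuned: list of flattened weight lists from the fine-tuned models
    Assumes every task vector (finetuned - base) is non-zero.
    """
    n = len(finetuned)
    # Task vectors: each fine-tuned model's delta from the base.
    deltas = [[w - b for w, b in zip(f, base)] for f in finetuned]

    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return dot / (nu * nv)

    # Average pairwise cosine between task vectors.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    cos_theta = sum(cos(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)

    # Interpolation ratio from the Model Stock paper:
    #   t = N * cos(theta) / (1 + (N - 1) * cos(theta))
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    # Merged weights lie on the line between the base and the model average.
    avg = [sum(col) / n for col in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

When the task vectors are orthogonal (cos θ = 0) the rule falls back to the base weights (t = 0); when the fine-tuned models agree exactly (cos θ = 1) it returns their average (t = 1).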

### Models Merged

The following models were included in the merge:
* [brayene/DPO-Qwen2-1.5B-Instruct-Human-like](https://huggingface.co/brayene/DPO-Qwen2-1.5B-Instruct-Human-like)
* [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B)
* [MadeAgents/Hammer2.1-1.5b](https://huggingface.co/MadeAgents/Hammer2.1-1.5b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: brayene/DPO-Qwen2-1.5B-Instruct-Human-like
  - model: Qwen/Qwen2.5-Math-1.5B
  - model: MadeAgents/Hammer2.1-1.5b

merge_method: model_stock
base_model: prithivMLmods/QwQ-R1-Distill-1.5B-CoT
parameters:
  normalize: false
  int8_mask: true
dtype: float16

```
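To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` CLI. This is a sketch: it assumes mergekit is installed locally, and the output directory name is an arbitrary choice.

```shell
# Save the YAML above as config.yml, then run the merge;
# ./merged-model is a hypothetical output directory.
pip install mergekit
mergekit-yaml config.yml ./merged-model
```

The merged weights and tokenizer files are written to the output directory and can then be loaded with `transformers` like any local checkpoint.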