# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [jpacifico/Chocolatine-14B-Instruct-DPO-v1.2](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.2) as the base.
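Task arithmetic builds the merged checkpoint as the base model's weights plus a weighted sum of "task vectors", each being a fine-tuned model's weights minus the base's. The sketch below is a minimal illustration of that idea, under the assumption that `normalize: true` rescales the weights so they sum to 1; it is not mergekit's actual implementation.

```python
# Minimal illustrative sketch of task arithmetic merging (assumption:
# `normalize: true` divides the summed task vectors by the total weight).
from typing import Dict

import torch


def task_arithmetic_merge(
    base: Dict[str, torch.Tensor],
    finetuned: Dict[str, Dict[str, torch.Tensor]],  # model name -> state dict
    weights: Dict[str, float],                      # model name -> merge weight
    normalize: bool = True,
) -> Dict[str, torch.Tensor]:
    scale = sum(weights.values()) if normalize else 1.0
    merged = {}
    for key, base_param in base.items():
        delta = torch.zeros_like(base_param)
        for name, state in finetuned.items():
            # Task vector: fine-tuned weights minus base weights.
            delta += weights[name] * (state[key] - base_param)
        merged[key] = base_param + delta / scale
    return merged
```

Since the base model also appears under `models:` with weight 1.0, its task vector is zero; under this reading of normalization, the merge effectively lands at the midpoint of the two checkpoints.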
### Models Merged

The following models were included in the merge:

* [jpacifico/Chocolatine-14B-Instruct-4k-DPO](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-4k-DPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
    parameters:
      weight: 1.0
  - model: jpacifico/Chocolatine-14B-Instruct-4k-DPO
    parameters:
      weight: 1.0
merge_method: task_arithmetic
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
  normalize: true
dtype: float16
```
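Saved to a file (the name `config.yaml` is illustrative), this configuration can be re-run with mergekit's `mergekit-yaml config.yaml ./output-directory` command. The resulting checkpoint loads like any other causal LM; a minimal sketch with a recent transformers release, generation settings chosen only for illustration:

```python
# Minimal sketch: load the merged model and generate with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Ph3task3-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # the merge itself was produced in float16
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what model merging does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```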
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 30.21 |
| IFEval (0-shot, strict accuracy) | 49.62 |
| BBH (3-shot, normalized accuracy) | 48.00 |
| MATH Lvl 5 (4-shot, exact match) | 14.58 |
| GPQA (0-shot, acc_norm) | 12.19 |
| MuSR (0-shot, acc_norm) | 14.95 |
| MMLU-PRO (5-shot, accuracy) | 41.90 |
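The reported average is consistent with the unweighted mean of the six benchmark scores, which is easy to check:

```python
# The leaderboard average is the unweighted mean of the six scores above.
scores = [49.62, 48.00, 14.58, 12.19, 14.95, 41.90]
print(round(sum(scores) / len(scores), 2))  # 30.21
```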
Evaluation results
- strict accuracy on IFEval (0-Shot)Open LLM Leaderboard49.620
- normalized accuracy on BBH (3-Shot)Open LLM Leaderboard48.000
- exact match on MATH Lvl 5 (4-Shot)Open LLM Leaderboard14.580
- acc_norm on GPQA (0-shot)Open LLM Leaderboard12.190
- acc_norm on MuSR (0-shot)Open LLM Leaderboard14.950
- accuracy on MMLU-PRO (5-shot)test set Open LLM Leaderboard41.900