Update README.md
### Benchmark Results Table

The table below summarizes the evaluation results across mathematical tasks and original capabilities.

| **Model**          | **MathH** | **Math** | **GSM8K** | **Math Avg.** | **ARC**  | **GPQA** | **MMLU** | **MMLUP** | **Orig. Avg.** | **Overall** |
|--------------------|-----------|----------|-----------|---------------|----------|----------|----------|-----------|----------------|-------------|
| Llama3.1-8B-Inst   | 23.7      | 50.9     | 85.6      | 52.1          | 83.4     | 29.9     | 72.4     | 46.7      | 60.5           | 56.3        |
| OpenMath2-Llama3   | 38.4      | 64.1     | 90.3      | 64.3          | 45.8     | 1.3      | 4.5      | 19.5      | 12.9           | 38.6        |
| **Full Tune**      | **38.5**  | **63.7** | 90.2      | **63.9**      | 58.2     | 1.1      | 7.3      | 23.5      | 16.5           | 40.1        |
| Partial Tune       | 36.4      | 61.4     | 89.0      | 61.8          | 66.2     | 6.0      | 25.7     | 30.9      | 29.3           | 45.6        |
| Stack Expansion    | 35.6      | 61.0     | 90.8      | 61.8          | 69.3     | 18.8     | 61.8     | 43.1      | 53.3           | 57.6        |
| Hybrid Expansion   | 34.4      | 61.1     | 90.1      | 61.5          | **81.8** | **25.9** | 67.2     | **43.9**  | 57.1           | 59.3        |
| **Control LLM***   | 38.1      | 62.7     | **90.4**  | 63.2          | 79.7     | 25.2     | **68.1** | 43.6      | **57.2**       | **60.2**    |

---
### Explanation of Metrics

- **MathH**: Math Hard, the hard subset of the MATH benchmark
- **Math**: General math reasoning (MATH benchmark)
- **GSM8K**: Grade-school math word problems
- **Math Avg.**: Average performance across Math Hard, Math, and GSM8K
- **ARC**: AI2 Reasoning Challenge
- **GPQA**: Graduate-level, Google-proof question answering
- **MMLU**: Massive Multitask Language Understanding
- **MMLUP**: MMLU-Pro, the professional-level extension of MMLU
- **Orig. Avg.**: Average original-capabilities performance across ARC, GPQA, MMLU, and MMLU-Pro
- **Overall**: Combined average across all tasks, consistent with the mean of **Math Avg.** and **Orig. Avg.** (see the sanity check below)
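As a quick sanity check (our reading of the table, not stated explicitly in the README), the **Overall** column matches the unweighted mean of **Math Avg.** and **Orig. Avg.** for every row, up to one-decimal rounding. A minimal Python sketch, with the three columns copied from the table above:

```python
# Recompute "Overall" as the mean of "Math Avg." and "Orig. Avg."
# (assumption: this is how the table aggregates; values copied from above).
rows = {
    "Llama3.1-8B-Inst": (52.1, 60.5, 56.3),
    "OpenMath2-Llama3": (64.3, 12.9, 38.6),
    "Full Tune":        (63.9, 16.5, 40.1),
    "Partial Tune":     (61.8, 29.3, 45.6),
    "Stack Expansion":  (61.8, 53.3, 57.6),
    "Hybrid Expansion": (61.5, 57.1, 59.3),
    "Control LLM":      (63.2, 57.2, 60.2),
}

for name, (math_avg, orig_avg, reported) in rows.items():
    overall = (math_avg + orig_avg) / 2
    # A 0.15 tolerance absorbs the one-decimal rounding of the inputs.
    assert abs(overall - reported) < 0.15, name
    print(f"{name:18s} recomputed={overall:5.2f}  reported={reported}")
```

Note that **Math Avg.** and **Orig. Avg.** themselves do not reproduce as simple means of the displayed per-task columns, so they presumably aggregate sub-benchmarks with their own weighting; only the final **Overall** column is recomputed here.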