Update README.md
README.md
CHANGED
@@ -36,7 +36,11 @@ We used a mixture of the following datasets
 - abacusai/HellaSwag_DPO_FewShot
 
 # **Data Contamination Test Results**
-
+We generate our contamination numbers using https://github.com/swj0419/detect-pretrain-code-contamination/tree/master, with internlm2-20b-llama as our reference model.
+luxia-21.4b-alignment-v1.2 has the following results:
+| Model | ARC | MMLU | TruthfulQA | GSM8K |
+|--------------------------------------|-------|---------|------------|--------|
+| **luxia-21.4b-alignment-v1.2** | 0.00 | 0.07 | 0.13 | 0.34 |
 
 ### **Open LLM Leaderboard Evaluation Results**
 | Model | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |