## MegaMath Compared to Existing Datasets
MegaMath is the largest open math pre-training dataset to date, surpassing DeepSeekMath (120B tokens).
<div style="display: flex; justify-content: center; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/_yAulJQ1lcL56Dvqnxr4W.png" width="75%" />
</div>
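
To take a quick look at the corpus, you can stream a few documents with the 🤗 `datasets` library. This is a minimal sketch: the repository id `LLM360/MegaMath`, the split, and the `text` field are assumptions, so check the dataset card for the exact subset/config names.

```python
# Minimal sketch: stream a few MegaMath documents for inspection.
# Assumes the dataset lives at "LLM360/MegaMath" on the Hugging Face Hub and
# exposes a "text" field; you may also need to pass a subset/config name.
from datasets import load_dataset

ds = load_dataset("LLM360/MegaMath", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example.keys())          # document text plus metadata fields
    print(example["text"][:300])   # first few hundred characters of the document
    if i == 2:
        break
```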
## MegaMath Delivers with High Quality
During development, we ran extensive experiments to find the optimal practices for text extraction, deduplication, fastText training, and other pipeline stages. Training on MegaMath data yields better performance than training on existing open datasets.
<div style="display: flex; justify-content: center; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/-E1tZP-vbU1ZPzy56cl4s.png" width="30%" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/XSBJ_wVexM-0rk9bcpU5Q.png" width="30%" />
</div>
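
The fastText filtering step can be reproduced in spirit with the sketch below: a supervised classifier trained on labeled math vs. general-web documents, then used to score new text. The file name, label names, and hyperparameters are illustrative and are not MegaMath's exact settings.

```python
# Minimal sketch of a fastText domain classifier for math filtering.
# Assumes a training file in fastText's supervised format, one line per document:
#   __label__math  <document text>
#   __label__other <document text>
import fasttext

model = fasttext.train_supervised(
    input="math_vs_other.train",  # illustrative path, not the actual training file
    lr=0.1,
    epoch=3,
    wordNgrams=2,
    dim=100,
)

# Score a candidate document; keep it if the math label clears a chosen threshold.
labels, probs = model.predict("Let x be a real number such that x^2 - 4 = 0.", k=1)
print(labels[0], probs[0])
```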
## Training MegaMath on Latest LMs
We also release two proof-of-concept models based on [Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) and [Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
Training Llama-3.2-1B and Llama-3.2-3B on MegaMath brings roughly a 15%~20% performance boost on 10 downstream benchmarks, demonstrating its high data quality.
<div style="display: flex; justify-content: center; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/EIReQ8TIbyn1V3JfsEKiL.png" width="30%" />
</div>
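
The proof-of-concept checkpoints can be used like any other causal LM with `transformers`. In the sketch below the repo id is a placeholder, so substitute the actual checkpoint name from the model cards.

```python
# Minimal sketch for trying a MegaMath proof-of-concept model with transformers.
# "ORG/MegaMath-Llama-3.2-1B" is a placeholder repo id; replace it with the
# real checkpoint name from the released model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ORG/MegaMath-Llama-3.2-1B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What is 17 * 24?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```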
## Detailed Statistics
| **Category** | **# Samples (M)** | **# Tokens (B)** | **Avg. # Tokens** |