koalazf99 committed · Commit d66022d · verified · 1 Parent(s): 98a8334

Update README.md

Files changed (1): README.md (+72 -3)

README.md CHANGED
---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- math
- code
- pre-training
- synthesis
size_categories:
- 1B<n<10B
---

# MegaMath: Pushing the Limits of Open Math Corpora

> MegaMath is part of TxT360, curated by the LLM360 team.
<center><img src="teasor.png" alt="MegaMath Collection" /></center>

We introduce MegaMath, an open math pre-training dataset curated from diverse, math-focused sources, with over 300B tokens.
MegaMath is curated via the following three efforts:
- **Revisiting web data**:
  We re-extracted mathematical documents from Common Crawl with math-oriented HTML optimizations, fastText-based filtering, and deduplication, all to acquire higher-quality data from the Internet (a minimal filtering sketch follows this list).
- **Recalling math-related code data**:
  We identified high-quality math-related code from a large code training corpus, Stack-V2, further enhancing data diversity.
- **Exploring synthetic data**:
  We synthesized QA-style text, math-related code, and interleaved text-code blocks from web data or code data.

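As referenced in the first bullet, the fastText-based filtering step can be pictured with a short sketch. This is a minimal illustration rather than the released pipeline: the training file name, the `__label__math` label scheme, and the 0.8 confidence cutoff are assumptions made for the example, using only the standard API of the open-source `fasttext` package.

```python
import fasttext

# Train a binary math-vs-other classifier on labeled documents.
# Each line of math_train.txt is assumed to look like:
#   "__label__math <document text>"  or  "__label__other <document text>"
# (file name and label scheme are illustrative placeholders).
model = fasttext.train_supervised(
    input="math_train.txt",
    lr=0.1,
    epoch=5,
    wordNgrams=2,
    dim=100,
)

def is_math_document(text: str, threshold: float = 0.8) -> bool:
    """Keep a document only if the classifier is confident it is math-related."""
    # fastText's predict() expects a single line of text, so strip newlines.
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__math" and probs[0] >= threshold

# Example: filter a small batch of candidate web documents.
candidates = [
    "We prove that the sum of two even integers is even.",
    "Top 10 travel destinations for your summer vacation.",
]
kept = [doc for doc in candidates if is_math_document(doc)]
print(len(kept), "of", len(candidates), "documents kept")
```

The actual corpus was filtered with classifiers and thresholds tuned by the authors; the exact training data and cutoff are not specified in this card.
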
## MegaMath Compared to Existing Datasets
MegaMath is the largest open math pre-training dataset to date, surpassing DeepSeekMath (120B tokens).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/_yAulJQ1lcL56Dvqnxr4W.png)

## MegaMath Delivers with High Quality
During development, we ran extensive experiments to find the optimal practices for text extraction, deduplication, fastText training, etc. Training on MegaMath data shows better performance than existing open datasets (a minimal deduplication sketch follows the figures below).

<p float="left">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/Cft-qAGA4xNsqTVuRIend.png" width="45%" />
  <img src="https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/XSBJ_wVexM-0rk9bcpU5Q.png" width="45%" />
</p>

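As one concrete illustration of the deduplication step mentioned above, near-duplicate documents can be detected with MinHash signatures and locality-sensitive hashing. This is a simplified sketch using the `datasketch` package; the word 3-gram shingling, 128 permutations, and 0.8 similarity threshold are illustrative assumptions, not the settings used to build MegaMath.

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # number of hash permutations (illustrative choice)

def minhash_of(text: str) -> MinHash:
    """Build a MinHash signature over word 3-gram shingles of a document."""
    words = text.lower().split()
    shingles = {" ".join(words[i:i + 3]) for i in range(max(1, len(words) - 2))}
    sig = MinHash(num_perm=NUM_PERM)
    for shingle in shingles:
        sig.update(shingle.encode("utf-8"))
    return sig

# Index documents one by one and skip any that look like near-duplicates
# of something already indexed.
lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)
docs = {
    "doc1": "The derivative of x squared is two x by the power rule.",
    "doc2": "The derivative of x squared is two x by the power rule",  # near-copy of doc1
    "doc3": "Integration by parts follows from the product rule.",
}
unique_keys = []
for key, text in docs.items():
    sig = minhash_of(text)
    if lsh.query(sig):  # a similar document is already in the index
        continue
    lsh.insert(key, sig)
    unique_keys.append(key)
print(unique_keys)  # doc2 is likely dropped as a near-duplicate of doc1
```

Exact deduplication (e.g., hashing normalized text) is typically applied alongside such fuzzy matching in large-scale pipelines.
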
## Training MegaMath on Latest LMs
We also release two proof-of-concept models based on [Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) and [Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
Training Llama-3.2-1B and Llama-3.2-3B on MegaMath brings a 15%~20% performance boost on 10 downstream benchmarks, demonstrating its high data quality.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/628f6e5ab90dde28ef57d293/EIReQ8TIbyn1V3JfsEKiL.png)

## Detailed Statistics
| **Category**       | **# Samples (M)** | **# Toks (B)** | **Avg. # Toks** |
|--------------------|------------------:|---------------:|----------------:|
| **Web Domain**     |         **121.5** |      **279.0** |      **2296.9** |
| Web                |             106.5 |          263.9 |          2478.7 |
| Web-Pro            |              15.0 |           15.1 |          1006.0 |
| **Code Domain**    |          **13.4** |       **28.1** |      **2102.7** |
| **Synthetic Data** |          **80.2** |       **64.5** |       **804.5** |
| Translated Code    |               7.4 |            7.2 |           979.5 |
| Q&A                |              22.6 |            7.0 |           308.3 |
| Text&Code Block    |              50.2 |           50.3 |          1002.1 |
| **Total**          |         **215.1** |      **371.6** |      **1727.6** |

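For readers who want to inspect the corpus, a minimal loading sketch is shown below. It assumes the data is hosted as parquet files under the `LLM360/MegaMath` repository on the Hugging Face Hub with a default `train` split; check the repository's file layout for the actual subset and configuration names before relying on them.

```python
from datasets import load_dataset

# Stream the corpus instead of downloading every parquet shard up front.
# "LLM360/MegaMath" and the "train" split are assumptions for this sketch;
# pick the subset (web, code, synthetic, ...) per the repository layout.
ds = load_dataset("LLM360/MegaMath", split="train", streaming=True)

# Peek at a few documents to see the available fields.
for i, example in enumerate(ds):
    print(sorted(example.keys()))
    print(str(example)[:300])
    if i >= 2:
        break
```

Streaming keeps memory use flat for a corpus of this size; dropping `streaming=True` will download and cache the selected shards locally instead.
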
## Citation
If you use our dataset or find our work useful, please cite:
```bibtex
@article{zhou2025megamath,
  title   = {MegaMath: Pushing the Limits of Open Math Corpora},
  author  = {Zhou, Fan and Wang, Zengzhi and Ranjan, Nikhil and Cheng, Zhoujun and Tang, Liping and He, Guowei and Liu, Zhengzhong and Xing, Eric P.},
  journal = {arXiv preprint arXiv:2504.xxxxx},
  year    = {2025},
  note    = {Preprint}
}
```