## Data Statistics

| Domain (#tokens/#samples) | Iteration 1 | Iteration 2 | Iteration 3 | Total |
| --- | --- | --- | --- | --- |
| painting | 374.41M | 429.63M | 96.57M | 900.61M |
| hobby | 150.23B | 42.78B | 44.05B | 237.06B |
| health | 191.20B | 427.93M | 18.43B | 210.06B |
| relationship | 21.87B | 3.69B | 129.60M | 25.69B |
| petroleum_and_natural_gas_engineering | 950.08M | 463.65M | 121.56M | 1.54B |
| optical_engineering | 2.54B | 253.06M | 263.99M | 3.06B |
| hydraulic_engineering | 57.36M | 75.40M | 3.65M | 136.41M |
| materials_science | 18.95B | 1.11B | 303.66M | 20.37B |
| weapons_science | 80.62M | 3.51B | 140.89M | 3.73B |
| gamble | 30.12B | 696.52M | 158.48M | 30.98B |
| Total | | | | |

## Data Construction Workflow

[finefineweb-data-workflow.pdf](https://prod-files-secure.s3.us-west-2.amazonaws.com/2a7193d5-2ba6-4119-b407-e81fda282197/04759271-c3ed-47de-99ca-8efaa76f2768/finefineweb-data-workflow.pdf)

The data construction workflow can be summarized as follows:

## Domain-Domain Similarity Analysis
1. Perform proportional weighted sampling of the domain subsets based on the sample size of each domain, with a total of 1 billion tokens sampled from the domain subsets.
2. Use the BGE-M3 model to compute the embeddings of the samples in each domain subset, referred to as domain embeddings.
3. Use the BGE-M3 model to compute the embeddings of the samples in each benchmark, referred to as benchmark embeddings (bench embeddings).
4. Calculate the MMD distance and the Wasserstein distance between the domain embeddings and the benchmark embeddings.
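
The README does not spell out how these distances are computed, so the sketch below is one plausible implementation rather than the authors' exact code: it assumes the dense BGE-M3 embeddings from steps 2 and 3 are already available as NumPy arrays (random stand-ins are used so the snippet runs on its own), estimates MMD with an RBF kernel, and approximates the Wasserstein distance with a sliced (random-projection) variant built on `scipy.stats.wasserstein_distance`. An exact optimal-transport solver (e.g. the POT library) could be substituted at higher cost.

```python
# Illustrative only: RBF-kernel MMD and a sliced-Wasserstein approximation
# between a domain embedding set and a benchmark embedding set. In practice
# the embeddings would come from BGE-M3; random vectors stand in for them here.
import numpy as np
from scipy.stats import wasserstein_distance


def rbf_mmd2(x: np.ndarray, y: np.ndarray, gamma: float) -> float:
    """Biased estimate of squared MMD with kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def mean_kernel(a: np.ndarray, b: np.ndarray) -> float:
        # Pairwise squared Euclidean distances via ||a||^2 + ||b||^2 - 2 a.b
        sq = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
        return float(np.exp(-gamma * np.clip(sq, 0.0, None)).mean())

    return mean_kernel(x, x) + mean_kernel(y, y) - 2.0 * mean_kernel(x, y)


def sliced_wasserstein(x: np.ndarray, y: np.ndarray, n_proj: int = 128, seed: int = 0) -> float:
    """Approximate W1 by averaging 1-D Wasserstein distances over random projections."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_proj):
        v = rng.normal(size=x.shape[1])
        v /= np.linalg.norm(v)
        dists.append(wasserstein_distance(x @ v, y @ v))
    return float(np.mean(dists))


if __name__ == "__main__":
    dim = 1024  # BGE-M3 dense embeddings are 1024-dimensional
    domain_emb = np.random.randn(2000, dim)  # stand-in for a sampled domain subset
    bench_emb = np.random.randn(500, dim)    # stand-in for a benchmark
    print("MMD^2     :", rbf_mmd2(domain_emb, bench_emb, gamma=1.0 / dim))
    print("Sliced W1 :", sliced_wasserstein(domain_emb, bench_emb))
```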
The results above reveal the following observations:
## Domain-Domain Duplication
Let $D_1, D_2, \dots, D_N$ represent $N$ distinct domains, where we select the top-20 URLs for each domain $D_i$, denoted as $\{U_{i1}, U_{i2}, \dots, U_{i20}\}$. The total set of URLs across all domains is represented as $\mathcal{U}$, and the total number of URLs is $M = |\mathcal{U}|$.

For each URL $U_k \in \mathcal{U}$, the term frequency (TF) is defined as the proportion of $U_k$ in the total set of URLs:

$\text{TF}(U_k) = \frac{\text{count}(U_k)}{M}$

where $\text{count}(U_k)$ is the number of occurrences of $U_k$ among the collected URLs. Combining TF with the corresponding inverse document frequency (IDF) gives each URL a TF-IDF score:

$\text{TF-IDF}(U_{ij}) = \text{TF}(U_{ij}) \times \text{IDF}(U_{ij})$

Using the TF-IDF values of all URLs within a domain, the domain-domain duplicate rate can be analyzed by comparing the **distribution** of TF-IDF values across domains. If a domain has many URLs with **high TF-IDF values**, it indicates that the domain's URLs are relatively **unique** and significant within the entire set of URLs. Conversely, if a domain has many URLs with **low TF-IDF values**, it suggests that the domain's URLs are more **common** across other domains. Analyzing these values helps assess how similar or redundant a domain's content is in relation to others based on its URL composition.
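
As a concrete illustration of the quantities above, the snippet below computes URL TF-IDF scores from per-domain top-URL lists. The domains and URLs are made-up placeholders, and the IDF form (log of the number of domains over the number of domains whose top list contains the URL) is a standard choice assumed here, not a definition quoted from this README.

```python
# Sketch of the URL TF-IDF computation with made-up domains and URLs.
# TF(U) follows the definition above: the share of U among all collected URLs.
# The IDF form below is an assumed standard log(N / n_containing).
import math
from collections import Counter

# Hypothetical input: each domain's top URLs (top-20 in the actual setup).
domain_urls = {
    "mathematics": ["mathsite.example", "proofwiki.example", "shared-qa.example"],
    "painting": ["artmuseum.example", "galleries.example", "shared-qa.example"],
    "hobby": ["modeltrains.example", "shared-qa.example", "galleries.example"],
}

all_urls = [u for urls in domain_urls.values() for u in urls]
M = len(all_urls)                 # total number of collected URLs
occurrences = Counter(all_urls)   # how often each URL appears across domains
n_domains = len(domain_urls)


def tf(url: str) -> float:
    """Proportion of this URL among all collected URLs."""
    return occurrences[url] / M


def idf(url: str) -> float:
    """Inverse document frequency over domains (assumed standard form)."""
    n_containing = sum(1 for urls in domain_urls.values() if url in urls)
    return math.log(n_domains / n_containing)


tfidf = {
    domain: {url: tf(url) * idf(url) for url in urls}
    for domain, urls in domain_urls.items()
}

for domain, scores in tfidf.items():
    print(domain, {url: round(score, 3) for url, score in scores.items()})
```

A URL shared by every domain (here `shared-qa.example`) gets a TF-IDF of zero, matching the intuition that common URLs signal overlap rather than uniqueness.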
## Domain-Benchmark BPC-Acc Correlation
Experimental method: using the 28 models described in the paper, we first calculate BPC (bits per character) for all domains to obtain a model ranking $R_D$. Similarly, we compute scores across all benchmarks to obtain a model ranking $R_M$. We then calculate the Spearman correlation between $R_D$ and $R_M$.
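
A minimal sketch of this rank-correlation step, assuming the per-model domain BPC values and benchmark scores have already been measured (the arrays below are random placeholders, not real measurements):

```python
# Sketch: Spearman correlation between a domain's BPC-based model ranking and
# a benchmark's score-based model ranking. Placeholder values stand in for the
# measurements over the 28 models.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_models = 28
domain_bpc = rng.uniform(0.6, 1.2, size=n_models)  # bits per character on one domain
bench_acc = rng.uniform(0.2, 0.9, size=n_models)   # accuracy on one benchmark

# Spearman compares rank orders; negate BPC so that "better" points the same
# way for both signals (lower BPC, higher accuracy).
rho, p_value = spearmanr(-domain_bpc, bench_acc)
print(f"Spearman rank correlation: {rho:.3f} (p = {p_value:.3g})")
```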
- For benchmarks like ARC, MMLU, GSM8K, HumanEval, and MBPP, STEM-related domains show higher correlation rankings, particularly mathematics, physics, and systems science.
- For TriviaQA, which emphasizes factual knowledge over reasoning, domains rich in world knowledge such as literature, history, and library science demonstrate higher correlation rankings.
## Bibtex
```bibtex
@misc{finefineweb,
  title={FineFineWeb: A Comprehensive Study on Fine-grained Domain Web Corpus},
  url={https://huggingface.co/datasets/m-a-p/FineFineWeb},
  author={M-A-P, Ge Zhang*, Xinrun Du*, Zhimiao Yu*, Zili Wang*, Zekun Wang, Shuyue Guo, Tianyu Zheng, Kang Zhu, Jerry Liu, Shawn Yue, Binbin Liu, Zhongyuan Peng, Yifan Yao, Jack Yang, Ziming Li, Bingni Zhang, Wenhu Chen, Minghao Liu, Tianyu Liu, Xiaohuan Zhou, Qian Liu, Taifeng Wang+, Wenhao Huang+},
  publisher={huggingface},
  version={v0.1.0},
  month={December},
  year={2024}
}
```