---
license: apache-2.0
task_categories:
- text-classification
- text2text-generation
- text-generation
language:
- en
size_categories:
- n>1T
---
# FineFineWeb: A Comprehensive Study on Fine-Grained Domain Web Corpus

arXiv: Coming Soon

Project Page: Coming Soon

Blog: Coming Soon
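
For quick inspection of the released data, a minimal loading sketch with the Hugging Face `datasets` library is shown below. The subset name (`"mathematics"`) and the streaming flag are illustrative assumptions; check the repository's file layout for the actual configuration names.

```python
# Hypothetical usage sketch (not part of the original card): stream a few
# records from one FineFineWeb domain subset. The subset name "mathematics"
# is an assumption; consult the dataset repository for the real config names.
from datasets import load_dataset

ds = load_dataset("m-a-p/FineFineWeb", "mathematics", split="train", streaming=True)

for i, sample in enumerate(ds):
    print(sample)  # inspect a few records without downloading the full corpus
    if i >= 2:
        break
```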

## Data Statistics

| Domain (#tokens/#samples) | Iteration 1 | Iteration 2 | Iteration 3 | Total |
| --- | --- | --- | --- | --- |
| painting | 374.41M | 429.63M | 96.57M | 900.61M |
| hobby | 150.23B | 42.78B | 44.05B | 237.06B |
| health | 191.20B | 427.93M | 18.43B | 210.06B |
| relationship | 21.87B | 3.69B | 129.60M | 25.69B |
| petroleum_and_natural_gas_engineering | 950.08M | 463.65M | 121.56M | 1.54B |
| optical_engineering | 2.54B | 253.06M | 263.99M | 3.06B |
| hydraulic_engineering | 57.36M | 75.40M | 3.65M | 136.41M |
| materials_science | 18.95B | 1.11B | 303.66M | 20.37B |
| weapons_science | 80.62M | 3.51B | 140.89M | 3.73B |
| gamble | 30.12B | 696.52M | 158.48M | 30.98B |
| Total | 4007.48B | 207.39B | 207.99B | 4422.86B |

## Data Construction Workflow

![finefineweb-data-workflow](./assets/finefineweb-data-workflow.png)

The data construction workflow can be summarized as follows:

## Domain-Domain Similarity Analysis

1. Perform proportional weighted sampling of the domain subsets based on the sample size of each domain, with a total of 1 billion tokens sampled from the domain subsets.
2. Use the BGE-M3 model to compute the embeddings of the samples in each domain subset, referred to as domain embeddings.
3. Use the BGE-M3 model to compute the embeddings of the samples in each benchmark, referred to as benchmark embeddings (bench embeddings).
4. Calculate the MMD distance and the Wasserstein distance between the domain embeddings and the benchmark embeddings (a sketch of the MMD computation follows this list).
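
As an illustration of step 4, here is a minimal sketch of a squared-MMD estimate between two embedding sets. It assumes the BGE-M3 embeddings have already been stacked into NumPy arrays; the RBF kernel and its bandwidth `gamma` are illustrative choices rather than details taken from this card.

```python
# Hypothetical sketch: biased estimate of squared MMD between domain and
# benchmark embeddings. The RBF kernel and bandwidth are assumptions; the
# card does not specify which kernel was used.
import numpy as np

def rbf_kernel(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Pairwise RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = (
        np.sum(x ** 2, axis=1)[:, None]
        + np.sum(y ** 2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(domain_emb: np.ndarray, bench_emb: np.ndarray, gamma: float = 1.0) -> float:
    """Biased squared Maximum Mean Discrepancy between two embedding sets."""
    k_dd = rbf_kernel(domain_emb, domain_emb, gamma)
    k_bb = rbf_kernel(bench_emb, bench_emb, gamma)
    k_db = rbf_kernel(domain_emb, bench_emb, gamma)
    return float(k_dd.mean() + k_bb.mean() - 2.0 * k_db.mean())

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
print(mmd_squared(rng.normal(size=(100, 16)), rng.normal(size=(120, 16)) + 0.5, gamma=0.1))
```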

![domain-benchmark similarity](./assets/domain-benchmark%20similarity.png)
The results above reveal the following observations:

## Domain-Domain Duplication

Let $D_1, D_2, \dots, D_N$ represent $N$ distinct domains, where we select the top-20 URLs for each domain $D_i$, denoted as $\{U_{i1}, U_{i2}, \dots, U_{i20}\}$. The total set of URLs across all domains is represented as $\mathcal{U}$, and the total number of URLs is $M = |\mathcal{U}|$.

For each URL $U_k \in \mathcal{U}$, the term frequency (TF) is defined as the proportion of $U_k$ in the total set of URLs:

$\text{TF}(U_k) = \frac{\text{count}(U_k)}{M}$

The TF-IDF value for each URL $U_{ij}$ in a specific domain $D_i$ is then computed:

$\text{TF-IDF}(U_{ij}) = \text{TF}(U_{ij}) \times \text{IDF}(U_{ij})$
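
For concreteness, a minimal sketch of this URL-level TF-IDF computation is shown below. The IDF term is not spelled out in this card, so the standard logarithmic form over the $N$ domains used here, along with the toy URL lists, is an assumption.

```python
# Hypothetical sketch: URL-level TF-IDF over per-domain top-URL lists.
# TF is the share of a URL in the pooled URL list (M in the notation above);
# the logarithmic IDF over domains is an assumed, standard instantiation.
import math
from collections import Counter

def url_tfidf(domain_top_urls: dict[str, list[str]]) -> dict[str, dict[str, float]]:
    """Return {domain: {url: tf-idf}} for each domain's top URLs."""
    pooled = [u for urls in domain_top_urls.values() for u in urls]
    m = len(pooled)            # total number of URLs across all domains
    counts = Counter(pooled)   # occurrences of each URL overall
    n_domains = len(domain_top_urls)

    # Document frequency: the number of domains whose top list contains the URL.
    df = Counter()
    for urls in domain_top_urls.values():
        df.update(set(urls))

    scores = {}
    for domain, urls in domain_top_urls.items():
        scores[domain] = {
            u: (counts[u] / m) * math.log(n_domains / df[u])  # TF x assumed IDF
            for u in urls
        }
    return scores

# Toy example: a URL shared by every domain gets IDF = 0 (fully redundant).
print(url_tfidf({"mathematics": ["arxiv.org", "mathoverflow.net"],
                 "physics": ["arxiv.org", "aps.org"]}))
```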

![domain-domain URL duplication](./assets/duplication.png)

Using the TF-IDF values of all URLs within a domain, the domain-domain duplicate rate can be analyzed by comparing the **distribution** of TF-IDF values across domains. If a domain has many URLs with **high TF-IDF values**, it indicates that the domain's URLs are relatively **unique** and significant within the entire set of URLs. Conversely, if a domain has many URLs with **low TF-IDF values**, it suggests that the domain's URLs are more **common** across other domains. Analyzing these values helps assess how similar or redundant a domain's content is in relation to others based on its URL composition.
163
 
 
165
 
166

## Domain-Benchmark BPC-Acc Correlation

Experimental method: Using 28 models (see the paper), we first calculate BPC (bits per character) for all domains to obtain a model ranking $R_D$. Similarly, we compute scores across all benchmarks to obtain a model ranking $R_M$. We then calculate the Spearman correlation between $R_D$ and $R_M$.
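
A minimal sketch of this rank-correlation step is shown below; the per-model BPC and accuracy vectors are made-up placeholders, not results from the paper.

```python
# Hypothetical sketch: Spearman correlation between a domain-BPC model ranking
# and a benchmark-accuracy model ranking. Scores below are illustrative only;
# lower BPC is better, so its sign is flipped before ranking.
import numpy as np
from scipy.stats import spearmanr

# One entry per model (28 in the paper; five shown here for brevity).
domain_bpc = np.array([0.92, 0.81, 0.88, 0.75, 0.84])      # BPC on one domain
benchmark_acc = np.array([0.41, 0.55, 0.47, 0.63, 0.52])   # accuracy on one benchmark

rho, p_value = spearmanr(-domain_bpc, benchmark_acc)        # rank-based correlation
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```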

![domain-benchmark BPC-Acc correlation](./assets/domain-benchmark%20correlation.png)

- For benchmarks like ARC, MMLU, GSM8K, HumanEval, and MBPP, STEM-related domains show higher correlation rankings, particularly mathematics, physics, and systems science.
- For TriviaQA, which emphasizes factual knowledge over reasoning, domains rich in world knowledge such as literature, history, and library science demonstrate higher correlation rankings.

## BibTeX

```bibtex
@misc{finefineweb,
  title={FineFineWeb: A Comprehensive Study on Fine-grained Domain Web Corpus},
  url={https://huggingface.co/datasets/m-a-p/FineFineWeb},
  author={M-A-P, Ge Zhang*, Xinrun Du*, Zhimiao Yu*, Zili Wang*, Zekun Wang, Shuyue Guo, Tianyu Zheng, Kang Zhu, Jerry Liu, Shawn Yue, Binbin Liu, Zhongyuan Peng, Yifan Yao, Jack Yang, Ziming Li, Bingni Zhang, Wenhu Chen, Minghao Liu, Tianyu Liu, Xiaohuan Zhou, Qian Liu, Taifeng Wang+, Wenhao Huang+},
  publisher={huggingface},
  version={v0.1.0},
  month={December},
  year={2024}
}
```