Questions on Deduplication Strategy, Temporal Metadata and Document Types
I hope you're all doing well. I'm reaching out to better understand some aspects of the RedPajama-Data dataset, especially regarding deduplication strategies, temporal metadata, and content categorization. Any insights from the community would be greatly appreciated.
Temporal Deduplication
Does RedPajama-Data perform deduplication across different time points? For example:
If a webpage with the same URL (or near-identical content) appears in multiple years (e.g., 2013, 2014, 2015), is it deduplicated?
If so:
Which version is retained by default — earliest, latest, or highest-quality snapshot?
Are there specific tools or metrics used to evaluate content equivalence over time?

Proposed Deduplication Strategy
We’re considering implementing a deduplication strategy based on the following rules:
(1) Default Rule: Retain only the first occurrence of a webpage.
(2) Exception Rule: Keep a subsequent crawl if:
- The content has undergone significant modification (e.g., expanded depth/breadth), or
- The new version is of higher quality.
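To make the proposal concrete, here is a minimal sketch of the two rules in Python. All helper names, thresholds, and heuristics are our own assumptions for illustration, not part of RedPajama-Data; a real pipeline would replace the placeholders with proper quality classifiers and similarity checks.

```python
# Hypothetical sketch of the proposed rules (names and thresholds are
# assumptions, not RedPajama-Data internals).

def is_significant_change(old_text: str, new_text: str,
                          growth_threshold: float = 1.5) -> bool:
    """Crude proxy for 'expanded depth/breadth': large length growth.
    The 1.5x threshold is an arbitrary illustrative choice."""
    return len(new_text) >= growth_threshold * max(len(old_text), 1)

def quality_score(text: str) -> float:
    """Placeholder quality heuristic (lexical diversity); a real system
    might use a readability metric or a trained quality classifier."""
    words = text.split()
    return len(set(words)) / max(len(words), 1)

def deduplicate(crawls):
    """crawls: iterable of (url, timestamp, text), assumed time-ordered."""
    kept = {}
    for url, ts, text in crawls:
        if url not in kept:
            # Default Rule: retain the first occurrence.
            kept[url] = (ts, text)
        else:
            _, old_text = kept[url]
            # Exception Rule: replace only if the later crawl changed
            # significantly or scores higher on the quality heuristic.
            if (is_significant_change(old_text, text)
                    or quality_score(text) > quality_score(old_text)):
                kept[url] = (ts, text)
    return kept
```

For example, three yearly snapshots of the same URL where only the last one grows substantially would keep the first crawl until the expanded 2015 version triggers the Exception Rule and replaces it.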
Could the community share thoughts on this approach?
Does this align with best practices?
Are there known pitfalls or alternative strategies we should consider?
Has something like this already been implemented in RedPajama-Data, or is it under development?
If not, are there recommended tools or toolchains for implementing such a strategy at scale?
How can one define "significant content difference" or "higher quality"? E.g.:
- Semantic similarity thresholds?
- Content length or structural changes?
- Quality heuristics (e.g., readability, domain authority)?
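As one concrete (and deliberately simple) way to operationalize a similarity threshold, the sketch below computes Jaccard similarity over word n-gram shingles; the function names and the 0.8 threshold are our assumptions. At corpus scale this exact computation is typically approximated with MinHash-style sketches rather than computed pairwise.

```python
# Illustrative only: shingle-based Jaccard similarity as a proxy for
# "significant content difference". Threshold is an assumed value.

def shingles(text: str, n: int = 3) -> set:
    """Set of word n-grams ('shingles') from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' shingle sets."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def significantly_different(a: str, b: str,
                            threshold: float = 0.8) -> bool:
    """Treat two snapshots as near-duplicates only if their shingle
    sets overlap heavily; below the threshold, keep both versions."""
    return jaccard(a, b) < threshold
```

Structural changes (length growth, new sections) and quality heuristics could then be combined with this score in whatever weighting the community finds works best.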
Temporal Metadata
Could someone also confirm whether the dataset includes timestamps or modification history for each entry, such as:
Original creation or generation time
Update or revision timestamps
Versioning information or change logs
Document Types

Also, we are interested in understanding whether RedPajama-Data includes specific types of documents and, if so, how they are categorized. Specifically, we would like to know whether the dataset encompasses:
(1) Academic papers, particularly those falling under:
Computer Science
Mathematics, Physics, Medical Sciences
Economics and Finance
Other Humanities and Social Sciences
(2) News articles, especially within the categories of:
Financial and Economic news
Political and Societal news
Other categories
(3) Textbooks
(4) Content exercising mathematical logic and reasoning, such as:
Logical deduction problems
Mathematical proofs or derivations
Symbolic reasoning tasks
Formal logic expressions or inference
Could you please confirm if these categories are represented in the dataset? Additionally, any information on how these categories are tagged or identified within the dataset would be greatly appreciated.
Thank you all very much for your time and contributions to this project. Looking forward to hearing your thoughts!