
This dataset contains the fully prepared data (tokenized and pre-shuffled) used to train the Pythia (deduplicated) models. You can find these models under the EleutherAI organization, and they are also listed in my Memorization Profiles collection.

This data is identical to EleutherAI/pile-deduped-pythia-preshuffled, but it is presented in a more manageable format: instead of the Megatron format used by the GPT-NeoX library, the data is stored as Parquet files.

Format

The dataset has 3 columns:

  • uid: a sequential identifier for the sequence (not present in the original dataset).
  • batch_idx: the index of the batch to which a sequence belongs (not present in the original dataset).
  • token_ids: the tokenised text, a list of 2049 token ids.
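
As a quick illustration of this schema, here is a minimal sketch that loads one chunk and decodes a sequence back to text. It assumes pandas and transformers are installed and that the first chunk has been downloaded locally; the tokenizer repo below (EleutherAI/pythia-70m-deduped) is my choice for illustration, since all Pythia models ship the same tokenizer.

import pandas as pd
from transformers import AutoTokenizer

# Load one chunk; each row is one training sequence of 2049 token ids.
df = pd.read_parquet("data/train-001000.parquet")
print(df.columns.tolist())  # ['uid', 'batch_idx', 'token_ids']
print(len(df))              # 1024000 rows

# Decode the first sequence back to text.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
print(tokenizer.decode(df["token_ids"].iloc[0])[:200])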

The dataset is split into 143 chunks (parquet files), and each chunk includes 1024000 sequences (rows) corresponding to 1000 batches of 1024 sequences each. This means that each chunk corresponds to the data seen between one checkpoint and the next. Specifically, the Pythia model checkpoints are available* at initialisation (step 0) and every 1000 steps thereafter (steps 1000, 2000, etc.) up to the last checkpoint (step 143000). This structure is reflected in the filenames: train-001000.parquet, train-002000.parquet, ..., train-143000.parquet. Let's clarify the mapping between chunks and checkpoints with an example.

Example: Consider file train-001000.parquet. It contains sequences with batch_idx in [0, 999]. These sequences were "seen" by checkpoint 1000. Batches with batch_idx >= 1000 are only seen by later checkpoints.
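
More generally, the mapping is simple arithmetic: the first checkpoint to have seen a given batch is (batch_idx // 1000 + 1) * 1000. A tiny helper (hypothetical, not part of this dataset's tooling) makes this concrete:

# Map a batch index to the first checkpoint step whose model has seen it.
def first_checkpoint_seeing(batch_idx: int) -> int:
    # batch_idx 0..999 -> step 1000, 1000..1999 -> step 2000, and so on.
    return (batch_idx // 1000 + 1) * 1000

assert first_checkpoint_seeing(0) == 1000
assert first_checkpoint_seeing(999) == 1000
assert first_checkpoint_seeing(1000) == 2000
assert first_checkpoint_seeing(142999) == 143000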

*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., data/train-001000.parquet).
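
For instance, here is a minimal sketch (assuming pandas and a local copy of the first chunk) of recovering the data seen by the log-spaced checkpoint at step 512:

import pandas as pd

# A checkpoint at step N has seen batches 0..N-1, i.e. rows with batch_idx < N.
first_chunk = pd.read_parquet("data/train-001000.parquet")
step = 512  # any of the log-spaced steps 1, 2, 4, 8, ..., 512
seen = first_chunk[first_chunk["batch_idx"] < step]
print(len(seen))  # 512 batches * 1024 sequences = 524288 rows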

License

For the license, refer to the original dataset (EleutherAI/pile-deduped-pythia-preshuffled).

Acknowledgements

Kudos to LLM360/AmberDatasets, which inspired this release.

Interacting with the data

Besides clarity and ease of use, another advantage of this release is that it lets you interact with the data without downloading it. The Parquet format plays nicely with the Hugging Face Hub, so you can use its integrations with external tools like DuckDB or Polars (pola-rs) to run queries over the data. For example,

import duckdb as db

# Count the number of sequences per batch, querying the Parquet files
# directly on the Hub and returning the result as a pandas DataFrame.
df = db.sql("""
SELECT batch_idx, count(1) AS count
FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet'
GROUP BY batch_idx
""").df()
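
The same query can be sketched with Polars' lazy API, which (in recent Polars versions that support hf:// paths) also streams over the Hub rather than downloading everything first:

import polars as pl

# Lazily scan all chunks on the Hub and count sequences per batch.
counts = (
    pl.scan_parquet("hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet")
    .group_by("batch_idx")
    .len()
    .collect()
)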