This dataset contains the fully prepared data (tokenised and pre-shuffled) used to train the Pythia (deduplicated) models. You can find these models under the EleutherAI organization; they are also listed in my Memorization Profiles collection.

This is the same data found in EleutherAI/pile-deduped-pythia-preshuffled, presented in a more manageable format: instead of the Megatron format used by the GPT-NeoX library, I have stored the data as parquet files.

Format

The dataset has 3 columns (see the loading sketch after this list):

  • uid: a sequential identifier for the sequence (not present in the original dataset).
  • batch_idx: the index of the batch to which a sequence belongs (not present in the original dataset).
  • token_ids: the tokenised texts, each 2049 tokens long.
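
As a quick sanity check, here is a minimal sketch of loading one chunk with pandas; it assumes the parquet files have been downloaded into a local data/ directory matching this repo's layout:

```python
import pandas as pd

# Load the first chunk (path as laid out in this repo).
df = pd.read_parquet("data/train-001000.parquet")

print(df.columns.tolist())           # ['uid', 'batch_idx', 'token_ids']
print(len(df.iloc[0]["token_ids"]))  # 2049
```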

The dataset is split into 143 chunks (i.e., parquet files), each corresponding to the data seen by a checkpoint. The Pythia model checkpoints are available at initialisation (i.e., step 0) and every 1000 steps thereafter (i.e., steps 1000, 2000, etc.) up to the last checkpoint (i.e., step 143000). This is reflected in the filenames: train-001000.parquet, train-002000.parquet, ..., train-143000.parquet.
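
If it helps, a small hypothetical helper (chunk_for_step is illustrative, not shipped with the dataset) can map a checkpoint step to the corresponding chunk filename:

```python
def chunk_for_step(step: int) -> str:
    """Return the chunk filename for a checkpoint step (1000, 2000, ..., 143000)."""
    assert step % 1000 == 0 and 1000 <= step <= 143000
    return f"data/train-{step:06d}.parquet"

print(chunk_for_step(1000))    # data/train-001000.parquet
print(chunk_for_step(143000))  # data/train-143000.parquet
```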

Example: consider the file train-001000.parquet. It contains the sequences with batch_idx 0-999. These sequences were "seen" (i.e., the model took a gradient step on them) before checkpoint 1000 was taken.
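
To make this concrete, the sketch below recovers the very first gradient-step batch from that chunk and decodes a few tokens. It assumes the transformers library and the EleutherAI/pythia-70m-deduped tokenizer, which should match the tokenisation used here:

```python
import pandas as pd
from transformers import AutoTokenizer

df = pd.read_parquet("data/train-001000.parquet")

# Every sequence in this chunk was seen before checkpoint 1000.
assert df["batch_idx"].between(0, 999).all()

# The sequences the model took its very first gradient step on.
first_batch = df[df["batch_idx"] == 0]

# Decode the start of one sequence with the Pythia tokenizer.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped")
print(tokenizer.decode(list(first_batch.iloc[0]["token_ids"][:32])))
```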

Note that additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., data/train-001000.parquet).
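
For instance, following the same convention (a checkpoint at step N has seen batches 0 through N-1), the data seen by the log-spaced checkpoint at step 512 can be recovered with a simple filter (a sketch, not a shipped utility):

```python
import pandas as pd

df = pd.read_parquet("data/train-001000.parquet")

# The model has taken gradient steps on batches 0-511 by step 512.
seen_by_step_512 = df[df["batch_idx"] < 512]
```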

License

For the license, refer to the original dataset (EleutherAI/pile-deduped-pythia-preshuffled).

Acknowledgements

Kudos to LLM360/AmberDatasets, which inspired this release.
