---
task_categories:
- text-generation
language:
- en
size_categories:
- 100B<n<1T
---

Batches with `batch_idx` >= 1000 are only seen by later checkpoints.

*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).*

## License

For the license, refer to the original dataset ([EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled)).

## Acknowledgements

Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets), which inspired this release.

## Interacting with the data

Besides clarity and ease of use, another great advantage of this release is that it lets users query the data without downloading it. The Parquet format plays nicely with the Hugging Face Hub, so you can use its integrations with external tools such as [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [Polars](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data. For example:

```python
import duckdb as db

# Count how many rows fall in each batch, streaming the parquet
# chunks directly from the Hub
df = db.sql("""
    SELECT batch_idx, count(1) AS count
    FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet'
    GROUP BY batch_idx
""").df()
```