Update README.md

README.md (CHANGED)

Let's clarify the mapping between chunks and checkpoints with an example.
Batches with `batch_idx` >= 1000 are only seen by later checkpoints.

*NOTE: Additional log-spaced checkpoints are available for the initial part of training (i.e., steps 1, 2, 4, 8, ..., 512). I did not create a separate file for these, but you can easily subset the first chunk (i.e., `data/train-001000.parquet`).*
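To make that concrete, here is a minimal sketch of such a subset, reusing the DuckDB-over-`hf://` pattern from the example further down. The `batch_idx < 512` cutoff is an assumption for illustration, taking the batches seen up to the last log-spaced checkpoint at step 512:

```python
import duckdb as db

# Minimal sketch: keep only the batches covered by the early log-spaced
# checkpoints. ASSUMPTION: the step-512 checkpoint has seen batches with
# batch_idx 0..511; adjust the cutoff for the checkpoint you care about.
early = db.sql("""
    SELECT *
    FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/train-001000.parquet'
    WHERE batch_idx < 512
""").df()
```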

## License

For the license, refer to the original dataset ([EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled)).

## Acknowledgements

Kudos to [LLM360/AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets), which inspired this release.

## Interacting with the data

Besides clarity and ease of use, another great advantage of this release is that it lets users interact with the data without downloading it.
The Parquet format plays nicely with the Hugging Face Hub, so you can use its integrations with external tools like [DuckDB](https://huggingface.co/docs/hub/en/datasets-duckdb) or [pola-rs](https://huggingface.co/docs/hub/en/datasets-polars) to run queries over the data.
For example,

```python
import duckdb as db

# Count how many rows carry each batch_idx, reading the Parquet shards
# directly from the Hub (no local download required).
df = db.sql("""
    SELECT batch_idx, count(1) AS count
    FROM 'hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet'
    GROUP BY batch_idx
""").df()
```
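The same aggregation can also be written against the pola-rs integration linked above; a hedged sketch, assuming a recent polars version with `hf://` path support:

```python
import polars as pl

# Same aggregation as the DuckDB example, scanned lazily from the Hub.
df = (
    pl.scan_parquet("hf://datasets/pietrolesci/pile-deduped-pythia-preshuffled/data/*.parquet")
    .group_by("batch_idx")
    .agg(pl.len().alias("count"))
    .collect()
)
```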