---
dataset_info:
  features:
  - name: path
    dtype: string
  - name: concatenated_notebook
    dtype: string
  splits:
  - name: train
    num_bytes: 13378216977
    num_examples: 781578
  download_size: 5447349438
  dataset_size: 13378216977
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- jupyter
- python
- notebooks
size_categories:
- 100K<n<1M
---

# Merged Jupyter Notebooks Dataset

## Introduction

This dataset is a transformed version of the [Jupyter Code-Text Pairs](https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs) dataset. The original dataset stores the markdown, code, and output of Jupyter notebook cells as separate fields. This transformation merges those fields, per notebook, into a single document that preserves the notebook's original flow, making it easier to read, analyze, and use whole notebooks for tasks such as text generation.

## Dataset Details

### Source

The original dataset is the [bigcode/jupyter-code-text-pairs](https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs) dataset on the Hugging Face Hub. Each row holds a notebook `path` together with the `markdown`, `code`, and `output` of one cell-level pair extracted from that notebook.
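
If you only want to inspect a few of the original pairs before committing to a full download, the `datasets` library can stream them. This is a minimal sketch, assuming the original dataset exposes a `train` split and the `path`, `markdown`, and `code` columns used in the query below:

```python
from datasets import load_dataset

# Stream the original pairs so nothing has to be downloaded up front
# (assumes a "train" split)
pairs = load_dataset("bigcode/jupyter-code-text-pairs", split="train", streaming=True)

# Peek at the first few rows
for example in pairs.take(3):
    print(example["path"])
    print(example["markdown"][:200])  # markdown preceding the code cell
    print(example["code"][:200])      # the code cell itself
    print("-" * 40)
```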

### Transformation Process

Using DuckDB, I processed the entire dataset without heavy hardware: DuckDB handles data larger than memory, so I could concatenate the markdown, code, and output for each notebook `path` into a single string that mirrors the structure of the original notebook.

The transformation was performed with the following DuckDB query:

```python
import duckdb

# Connect to (and create) a new DuckDB database file
new_db = duckdb.connect('merged_notebooks.db')

# Concatenate the markdown, code, and output of every cell, grouped by notebook path
query = """
SELECT path,
       STRING_AGG(CONCAT('###Markdown\n', markdown, '\n###Code\n', code, '\n###Output\n', output), '\n') AS concatenated_notebook
FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
GROUP BY path
"""

# Execute the query and materialize the result as a new table
new_db.execute(f"CREATE TABLE concatenated_notebooks AS {query}")
```
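
Once the table exists, it can be sanity-checked and exported back to Parquet straight from DuckDB. This is a small sketch using the connection from the snippet above; the output filename is just an example:

```python
# Count the merged notebooks and peek at one of them
total = new_db.execute("SELECT COUNT(*) FROM concatenated_notebooks").fetchone()[0]
print(f"{total} merged notebooks")

path, notebook = new_db.execute(
    "SELECT path, concatenated_notebook FROM concatenated_notebooks LIMIT 1"
).fetchone()
print(path)
print(notebook[:500])  # first 500 characters of the merged notebook

# Export the table to Parquet, e.g. for uploading to the Hub
new_db.execute("COPY concatenated_notebooks TO 'train.parquet' (FORMAT PARQUET)")
```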

## Usage

To replicate the transformation or explore the original dataset, you can download it using the following command:

```bash
git clone https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs
```

Once the repository is cloned (Git LFS is needed to pull the Parquet files), you can run the DuckDB query above to process the data as needed.
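
Before running the full aggregation, it can be worth previewing the cloned Parquet files. A quick sketch, assuming the repository was cloned into the current directory:

```python
import duckdb

con = duckdb.connect()

# Count the cell-level rows and the distinct notebook paths in the source files
print(con.execute("""
    SELECT COUNT(*) AS rows, COUNT(DISTINCT path) AS notebooks
    FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
""").fetchall())

# Inspect the column names and types of the source files
print(con.execute("""
    DESCRIBE SELECT * FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
""").fetchall())
```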

## Conclusion

This dataset provides a more integrated view of Jupyter notebooks by merging markdown, code, and output into a single, notebook-like document per path. The transformation also shows that DuckDB can handle a dataset of this size efficiently on modest hardware, making it a practical tool for data transformation tasks.