Update README.md
- split: train
  path: data/train-*
---
# Merged Jupyter Notebooks Dataset

## Introduction
This dataset is a transformed version of the [Jupyter Code-Text Pairs](https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs) dataset. The original dataset contains markdown, code, and output pairs extracted from Jupyter notebooks. This transformation merges these components into a single, cohesive format that resembles a Jupyter notebook, making it easier to analyze and understand the flow of information.
## Dataset Details

### Source
The original dataset is sourced from the Hugging Face Hub, specifically the [bigcode/jupyter-code-text-pairs](https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs) dataset. It contains pairs of markdown, code, and output from Jupyter notebooks.
### Transformation Process
Using DuckDB, I processed the entire dataset without heavy hardware: its efficient handling of large datasets allowed me to concatenate the markdown, code, and output for each notebook path into a single string, simulating the structure of a Jupyter notebook.
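The merge logic can be sketched in plain Python. Note that the field names `path`, `markdown`, `code`, and `output`, and the sample rows, are assumptions for illustration based on the description above, not the source dataset's exact schema:

```python
# Sketch of the merge: group cell triples by notebook path and
# concatenate markdown, code, and output into one notebook-like string.
# Field names and sample rows are assumptions, not the dataset's schema.
from collections import defaultdict

rows = [
    {"path": "nb1.ipynb", "markdown": "# Load data",
     "code": "import pandas as pd", "output": ""},
    {"path": "nb1.ipynb", "markdown": "Inspect it",
     "code": "df.head()", "output": "   a  b"},
]

def merge_notebooks(rows):
    grouped = defaultdict(list)
    for row in rows:
        # Render each cell as markdown, then code, then output,
        # skipping empty components
        cell = "\n".join(
            part for part in (row["markdown"], row["code"], row["output"]) if part
        )
        grouped[row["path"]].append(cell)
    # One merged string per notebook path
    return {path: "\n\n".join(cells) for path, cells in grouped.items()}

merged = merge_notebooks(rows)
print(merged["nb1.ipynb"])
```

The actual transformation groups by notebook path in the same way, but runs as a single query inside DuckDB rather than in Python.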
The transformation was performed using the following DuckDB query: