  - split: train
    path: TypeScript/train-*
---

# 💻 Stack-Edu

Stack-Edu is a 125B-token dataset of educational code filtered from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2), specifically from StarCoder2Data, the curated training corpus of the [StarCoder2](https://arxiv.org/abs/2402.19173) models. It is intended for training language models.

This dataset was curated with a classifier-based filtering strategy, inspired by [📚 FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu), to retain only the highest-quality educational programming content.
## Downloading the data
This dataset contains only the SWHIDs needed to fetch the code files, not the file contents themselves. The contents can be downloaded from Software Heritage's S3 bucket to ensure data compliance.
Please refer to [the-stack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train-full-ids) for the data license.

When running on a 16-core AWS `us-east-1` instance, the following script takes about 6 hours to download the files:
```python
import gzip

import boto3
from botocore.exceptions import ClientError
from datasets import load_dataset

num_proc = 16
s3 = boto3.client('s3')
bucket_name = "softwareheritage"

def download_contents(blob_id):
    key = f"content/{blob_id}"
    try:
        obj = s3.get_object(Bucket=bucket_name, Key=key)
        with gzip.GzipFile(fileobj=obj['Body']) as fin:
            content = fin.read().decode("utf-8", errors="ignore")
        return {"text": content, "download_success": True}
    except ClientError as e:
        if e.response['Error']['Code'] == 'NoSuchKey':
            print(f"File not found: {key}")
            return {"text": "", "download_success": False}
        raise

ds = load_dataset("HuggingFaceTB/stack-edu", "python", split="train", num_proc=num_proc)
ds = ds.map(download_contents, input_columns="blob_id", num_proc=num_proc)

# Filter out failed downloads
ds = ds.filter(lambda x: x['download_success'])

# Optionally, print the first example to verify the data
print(ds[0])
```
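Over a multi-hour bulk download, transient S3 errors (e.g. throttling) are likely. A small retry helper can wrap the download function; this is a sketch, not part of the original script, and the backoff parameters are illustrative:

```python
import time

def with_retries(fn, max_attempts=4, base_delay=1.0, retryable=(Exception,)):
    """Wrap fn so failures of the given types are retried with exponential backoff."""
    def wrapped(*args, **kwargs):
        for attempt in range(max_attempts):
            try:
                return fn(*args, **kwargs)
            except retryable:
                if attempt == max_attempts - 1:
                    raise  # out of attempts, propagate the error
                time.sleep(base_delay * 2 ** attempt)
    return wrapped
```

In the script above you could, for instance, pass `with_retries(download_contents)` to `ds.map`, narrowing `retryable` to the botocore exceptions you consider transient.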

## Details
The table below shows the number of tokens per programming language, counted with the [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) tokenizer.

| Language   | Stack-Edu (B tokens) |
|------------|----------------------|
| Python     | 21.8                 |
| C++        | 16.0                 |
| Markdown   | 14.0                 |
| C          | 11.1                 |
| JavaScript | 11.1                 |
| Java       | 42.1                 |
| SQL        | 9.62                 |
| PHP        | 9.07                 |
| C#         | 8.87                 |
| TypeScript | 3.03                 |
| Shell      | 3.13                 |
| Swift      | 1.83                 |
| Go         | 1.80                 |
| Rust       | 1.75                 |
| Ruby       | 1.61                 |

## Dataset curation and results
To build Stack-Edu, we:

- Selected the 15 largest programming languages from StarCoder2Data
- Trained 15 language-specific classifiers based on the [StarEncoder](https://huggingface.co/bigcode/starencoder) model, using synthetic annotations generated by Llama3-70B-Instruct. The classifier for each language is available in this [collection](https://huggingface.co/collections/HuggingFaceTB/the-ultimate-collection-of-code-classifiers-67b5aa3eb8994a4b71453005)
- Applied a filtering threshold of 3 (out of 5) to retain highly educational content, except for Java, which performed best at a threshold of 2

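The thresholding step above can be sketched in a few lines of Python. The helper name and sample scores are illustrative; only the thresholds themselves (3 by default, 2 for Java) come from the list above:

```python
# Per-language educational-score thresholds, as described above.
# Scores are the classifiers' 0-5 educational ratings.
DEFAULT_THRESHOLD = 3
LANGUAGE_THRESHOLDS = {"java": 2}  # Java performed best at threshold 2

def keep_file(language: str, edu_score: float) -> bool:
    """Return True if a file passes the educational-quality filter."""
    threshold = LANGUAGE_THRESHOLDS.get(language.lower(), DEFAULT_THRESHOLD)
    return edu_score >= threshold

# Hypothetical (language, score) pairs:
samples = [("python", 3.2), ("python", 2.4), ("java", 2.1), ("java", 1.8)]
kept = [s for s in samples if keep_file(*s)]
print(kept)  # [('python', 3.2), ('java', 2.1)]
```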
Stack-Edu shows consistent improvements over StarCoder2Data across all programming languages on the MultiPL-E benchmark.

<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/GWnPgD0diMu0I8buK6mvG.png" width="600"/>

### Citation Information

```
@misc{allal2025smollm2smolgoesbig,
      title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
      year={2025},
      eprint={2502.02737},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02737},
}
```