---
license: cdla-permissive-2.0
task_categories:
- text-generation
tags:
- code
- ocr
size_categories:
- 1M<n<10M
---

**SynthCodeNet** is a multimodal dataset created for training the **SmolDocling** model. It consists of over **9.3 million** synthetically generated image-text pairs, covering code snippets from **56** different programming languages. Text data was sourced from permissively licensed resources, while images were synthetically generated using LaTeX and Pygments to ensure visual diversity.
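The rendering pipeline itself is not included in this card. As a rough illustration (not the authors' actual pipeline) of how Pygments alone can produce visually distinct renderings of the same snippet, here is a minimal sketch; HTML output is used in place of image output to keep the example self-contained:

```python
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name
from pygments.styles import get_all_styles

snippet = 'print("hello, world")'

# Every Pygments style colors the same tokens differently, so iterating
# over styles (and lexers, fonts, or a LaTeX rendering path) is one simple
# way to obtain visually diverse renderings of identical code.
styles = list(get_all_styles())
html = highlight(snippet, get_lexer_by_name("python"), HtmlFormatter(style="default"))
```

Rendering the same snippet under many styles, fonts, and layout engines multiplies the visual variety available for a fixed piece of text.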

---

## Dataset Statistics

* **Total samples**: 9,334,257
* **Training set**: 8,400,838
* **Validation set**: 466,703
* **Test set**: 466,716
* **Modalities**: Image, Text
* **Image Generation**: Synthetic (LaTeX, Pygments)

### Programming Languages & Sample Counts

| Language | Samples | Language   | Samples | Language    | Samples   |
| -------- | ------- | ---------- | ------- | ----------- | --------- |
| Ada      | 20,094  | Dart       | 20,415  | Matlab      | 1,170     |
| Awk      | 22,334  | Dockerfile | 99,459  | MoonScript  | 6,237     |
| Bash     | 98,950  | Elixir     | 20,387  | Nim         | 37,236    |
| C        | 599,096 | Erlang     | 20,039  | OCaml       | 32,297    |
| C#       | 303,720 | FORTRAN    | 34,023  | ObjectiveC  | 158,398   |
| C++      | 698,870 | Forth      | 5,548   | Octave      | 2,537     |
| CMake    | 19,910  | Go         | 333,722 | PHP         | 249,566   |
| COBOL    | 5,153   | HTML       | 245,228 | Pascal      | 28,254    |
| CSS      | 236,596 | Haskell    | 39,848  | Perl        | 33,938    |
| Ceylon   | 8,369   | Haxe       | 20,070  | Prolog      | 2,058     |
| Clojure  | 20,765  | Java       | 698,421 | Python      | 1,797,063 |
| Crystal  | 24,720  | JavaScript | 530,899 | Racket      | 4,340     |
| Cuda     | 142,344 | Julia      | 29,681  | Ruby        | 348,976   |
| Cython   | 22,136  | Kotlin     | 292,986 | Rust        | 344,491   |
| D        | 20,338  | Lisp       | 29,749  | SML         | 19,333    |
| Lua      | 25,328  | SQL        | 493,412 | YAML        | 249,011   |
| Scala    | 273,825 | Scheme     | 23,242  | VisualBasic | 13,908    |
| Swift    | 25,374  | TypeScript | 255,475 | XML         | 246,209   |
| bc       | 249     | dc         | 1,713   |             |           |
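A quick arithmetic check on the counts above confirms that the three splits partition the full dataset in roughly a 90/5/5 ratio:

```python
# Sanity check on the split sizes listed in this card: the three splits
# should sum to the total, in approximately a 90% / 5% / 5% ratio.
total = 9_334_257
train, val, test = 8_400_838, 466_703, 466_716

assert train + val + test == total
ratios = tuple(round(n / total, 3) for n in (train, val, test))
# ratios == (0.9, 0.05, 0.05)
```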
---

## Data Format

Each dataset entry is structured as follows:

```json
{
  "images": [PIL Image],
  "texts": [
    {
      "assistant": "<loc_x0><loc_y0><loc_x1><loc_y1><_Language_>CODE_SNIPPET</code>",
      "source": "SynthCodeNetNoImageTag",
      "user": "<code>"
    }
  ]
}
```
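The `assistant` string packs a bounding box, a language tag, and the transcribed code into a single sequence. A small illustrative parser (hypothetical, not part of any released tooling; it assumes the `loc_*` placeholders hold integer coordinates in real samples) might look like:

```python
import re

# Hypothetical helper: splits the "assistant" string format shown above
# into its bounding box, language tag, and code body. The tag layout is
# taken from the example entry in this card.
ASSISTANT_RE = re.compile(
    r"<loc_(\d+)><loc_(\d+)><loc_(\d+)><loc_(\d+)>"  # bounding box coords
    r"<_([^_]+)_>"                                   # language tag
    r"(.*?)</code>",                                 # transcribed code
    re.DOTALL,  # code snippets span multiple lines
)

def parse_assistant(s: str):
    """Return ((x0, y0, x1, y1), language, code) from an assistant string."""
    m = ASSISTANT_RE.fullmatch(s)
    if m is None:
        raise ValueError("unrecognized assistant format")
    x0, y0, x1, y1, lang, code = m.groups()
    return (int(x0), int(y0), int(x1), int(y1)), lang, code

bbox, lang, code = parse_assistant(
    "<loc_10><loc_12><loc_500><loc_380><_Python_>print('hi')</code>"
)
```

The `user` field carries only the `<code>` prompt tag, so the model learns to emit the localized, language-tagged transcription in response to it.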

---

## Intended Use

* Training multimodal models for **document understanding**, specifically:
  * **Code snippet extraction and transcription**

---

## Citation

If you use SynthCodeNet, please cite:

```bibtex
@article{nassar2025smoldocling,
  title={SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion},
  author={Nassar, Ahmed and Marafioti, Andres and Omenetti, Matteo and Lysak, Maksym and Livathinos, Nikolaos and Auer, Christoph and Morin, Lucas and de Lima, Rafael Teixeira and Kim, Yusik and Gurbuz, A Said and others},
  journal={arXiv preprint arXiv:2503.11576},
  year={2025}
}
```