jblitzar committed · Commit 2a93771 (verified) · Parent: d967483

Update README.md
  data_files:
  - split: train
    path: data/train-*
annotations_creators:
- author
license:
- gpl-3.0
multilinguality:
- monolingual
pretty_name: GitHub-Python
dataset_name: github-python
dataset_type: code
tags:
- code
- python
size_categories:
- 100K<n<1M
task_categories:
- text-generation
---

# GitHub-Python — Licensed & Elaborated Variants

This repository ships **two complementary Python-code corpora** extracted from
public GitHub:

- **Licensed Subset** – strictly _permissive-licensed_ files suitable for
  commercial redistribution and model training (the main corpus used in our
  experiments).
- **Elaborated Collection** – a broader crawl that additionally contains files
  under _copyleft_ or unclear licenses (GPL/AGPL/LGPL, etc.). Useful for
  analysis or pre-training where license mixing is acceptable.

Both variants target **code-completion / generation** research.

## Dataset at a glance

|                     | **Licensed Subset** | **Elaborated Collection** |
| ------------------- | ------------------- | ------------------------- |
| Files (.py)         | 53,017              | 186,066                   |
| Unique repositories | 16,447              | 59,852                    |
| Repository owners   | 12,515              | 43,517                    |
| Compressed size     | 732 MB              | 2.4 GB \*                 |
| Vocabulary (tokens) | 443,431             | 443,431 †                 |
| License coverage    | Permissive only     | Mixed (perm. + copyleft)  |
| Secrets redacted    | ✅                  | ⚠️ not guaranteed         |
| Time window         | ≥ 2015-01-01        | ≥ 2015-01-01              |

\* Estimated – the Elaborated Collection is distributed as a raw file list, not
a single text file.
† The same tokenizer file is shared by both variants.

Numbers were obtained from the final redacted corpus and companion metadata.

---

## Dataset structure

```
huggingface_dataset/
├─ mega_licensed_corpus_redacted.txt      # Licensed Subset – concatenated code
├─ python_files.txt                       # Licensed Subset – raw file URLs
├─ python_files_elaborated.txt            # Elaborated Collection – raw file URLs
├─ python_files_elaborated_metadata.csv   # Elaborated Collection metadata
└─ custom_tokens_vocab.txt                # `<token>\t<id>` vocabulary file
```

### File separator

Individual files are concatenated with the sentinel line:

```
# <FILESEP>
```

Anything from one sentinel until the next sentinel (or EOF) is the source code
of one file.
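
Recovering individual files is a simple split on the sentinel; a minimal sketch
(the `split_corpus` helper name is ours, not part of the release):

```python
SENTINEL = "# <FILESEP>"

def split_corpus(text: str) -> list[str]:
    """Split concatenated corpus text back into per-file source strings."""
    # Everything between one sentinel and the next (or EOF) is one file.
    return [chunk.strip("\n") for chunk in text.split(SENTINEL) if chunk.strip()]
```

For example, `split_corpus(open("mega_licensed_corpus_redacted.txt").read())`
yields one string per original source file.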

---

## Dataset variants

### 1. Licensed Subset (`mega_licensed_corpus_redacted.txt`)

- 53 K permissively licensed files (MIT/BSD/Apache/ISC/Unlicense).
- All API keys & credentials removed.
- Ready for redistribution & commercial use (respect upstream NOTICE files).

### 2. Elaborated Collection (`python_files_elaborated.txt`)

- 186 K files from a much larger crawl.
- Contains **GPL / LGPL / AGPL and other copyleft** licenses.
- Shipped _as a URL list_ plus metadata CSV; you must download the files
  yourself (`datasets.load_dataset` streaming, `wget`, etc.).
- **No license filtering or secret redaction performed** – use with caution.

Before loading the dataset, decide which variant aligns with your use case
(e.g. proprietary model training → Licensed Subset only).
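
When fetching the Elaborated Collection yourself, GitHub blob URLs can be
mapped to their raw counterparts before downloading. A sketch, assuming the
list stores standard `github.com/.../blob/...` URLs (adjust if it already
contains raw URLs):

```python
def to_raw_url(blob_url: str) -> str:
    """Map a github.com blob URL to its raw.githubusercontent.com equivalent."""
    return (blob_url
            .replace("https://github.com/", "https://raw.githubusercontent.com/", 1)
            .replace("/blob/", "/", 1))
```

Each line of `python_files_elaborated.txt` can be passed through this helper
before handing it to `wget` or `requests.get`.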

---

## Collection methodology

1. **Repository discovery**

   - Queried the GitHub REST API for projects with **≥ 10 stars**
     (earlier iterations used 100+, later expanded for coverage).
   - Only repositories whose primary language is _Python_ and whose last
     commit is from 2015 or later.

2. **File filtering**

   - Retain files whose **size ∈ [1 KB, 100 KB]**.
   - Exclude common build/packaging scripts (`setup.py`, `__init__.py`, etc.).

3. **License compliance**

   - Allowed: MIT, Apache-2.0, BSD-2/3-Clause, ISC, Unlicense.
   - GPL, LGPL, AGPL and proprietary licenses were **excluded**.

4. **Deduplication**

   - Unique file SHA hashes; duplicates skipped.

5. **Formatting & cleaning**

   - Formatted with _autopep8_ to normalise whitespace.
   - A custom script removed trailing whitespace & normalised newlines.

6. **Secret redaction**

   - `truffleHog` plus a custom regex pass removed >150 active credentials.
   - The redacted corpus is stored as `mega_licensed_corpus_redacted.txt`.
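
The size and filename filters of step 2 can be sketched as follows (the helper
name and the exact exclusion list are illustrative, not the original script):

```python
import os

EXCLUDED_NAMES = {"setup.py", "__init__.py"}   # common build/packaging scripts
MIN_SIZE, MAX_SIZE = 1 * 1024, 100 * 1024      # size window: 1 KB .. 100 KB

def keep_file(path: str, size_bytes: int) -> bool:
    """Return True if a file passes the size and filename filters above."""
    name = os.path.basename(path)
    if not name.endswith(".py") or name in EXCLUDED_NAMES:
        return False
    return MIN_SIZE <= size_bytes <= MAX_SIZE
```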

---

## Custom tokenisation

The accompanying `custom_tokens_vocab.txt` implements a **Python-aware
sub-token scheme**:

1. Strip doc-strings & comments.
2. Split on:
   - camel-case boundaries (`CamelCase` → `Camel`, `Case`)
   - underscores and spaces
   - indentation & newlines (preserved as a `<newline>` token)
3. Rare tokens (frequency < 10) were dropped → a 443 k-token vocabulary.

Example:

```python
def helloWorld(value):
    return value + 1
```

tokenises to:

```
def hello world ( value ) <newline> return value + 1 <newline>
```
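
The splitting rules above can be approximated with a few regular expressions.
This is an illustrative re-implementation, not the exact script used to build
the released vocabulary (punctuation handling may differ in detail):

```python
import re

NEWLINE = "<newline>"

def tokenise(code: str) -> list[str]:
    """Approximate the sub-token scheme: separate punctuation, split
    camel-case boundaries and underscores, lowercase, and keep newlines
    as a sentinel token."""
    tokens = []
    for line in code.splitlines():
        # Separate identifiers, numbers, and individual punctuation marks.
        for piece in re.findall(r"[A-Za-z_]+|\d+|[^\sA-Za-z\d]", line):
            # Insert a space at lower→upper camel-case boundaries,
            # turn underscores into spaces, then lowercase each part.
            split = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", piece)
            split = split.replace("_", " ")
            tokens.extend(t.lower() for t in split.split())
        tokens.append(NEWLINE)
    return tokens
```

For instance, `tokenise("def helloWorld(value):")` yields
`["def", "hello", "world", "(", "value", ")", ":", "<newline>"]`.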

---

## Usage

```python
from datasets import load_dataset

ds = load_dataset("jblitzar/github-python", split="train")

print(ds[0]["code"][:300])  # raw source code
```

If you prefer token-level examples (e.g. to keep preprocessing out of the
training loop), map the tokenizer over the dataset:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# `custom_tokens_vocab.txt` stores `<token>\t<id>` pairs, so build a
# word-level tokenizer from it (`Tokenizer.from_file` expects a serialized
# tokenizer JSON, not a plain vocabulary). Assumes an `<unk>` entry exists.
with open("custom_tokens_vocab.txt", encoding="utf-8") as f:
    vocab = {t: int(i) for t, i in (line.rstrip("\n").split("\t") for line in f)}
tok = Tokenizer(WordLevel(vocab, unk_token="<unk>"))
tok.pre_tokenizer = Whitespace()

def encode(ex):
    ex["input_ids"] = tok.encode(ex["code"]).ids
    return ex

ds = ds.map(encode, remove_columns=["code"])
```

---

## Ethical considerations & limitations

- **Licenses respected** – only permissive licenses are included; retain NOTICE
  files when redistributing derivative works.
- **Secrets removed** – automated & manual audits were performed, yet users
  **must not assume zero secrets**; re-audit before public deployments.
- **Code quality** – projects vary in style & correctness. Models trained on
  this corpus may replicate bugs or vulnerable patterns.

---

## Citation

If you use this dataset, please cite:

```
@misc{github-python-2024,
  author       = {JBlitzar},
  title        = {GitHub-Python: A Permissively Licensed Corpus of Python Code},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/jblitzar/github-python}},
  note         = {Version 1.0}
}
```

---

## License

Dataset card and aggregation scripts: **GPLv3**.
Each code snippet remains under its **original repository license** (MIT,
Apache-2.0, BSD, ISC, etc.). Users must comply with upstream notices when
redistributing code or derivatives.