jblitzar committed · verified
Commit 0ce452d · Parent(s): 2ce5aed

Update README.md

Files changed (1): README.md (+1 −199)
task_categories:
  - text-generation
---

# GitHub-Python — Licensed & Elaborated Variants

This repository ships **two complementary Python-code corpora** extracted from
public GitHub:

- **Licensed Subset** – strictly _permissively licensed_ files suitable for
  commercial redistribution / model training (the main corpus used in our
  experiments).
- **Elaborated Collection** – a broader crawl that additionally contains files
  under _copyleft_ or unclear licenses (GPL/AGPL/LGPL, etc.). Useful for
  analysis or pre-training where license mixing is acceptable.

Both variants target **code-completion / generation** research.

## Dataset at a glance

|                     | **Licensed Subset** | **Elaborated Collection** |
| ------------------- | ------------------- | ------------------------- |
| Files (.py)         | 53,017              | 186,066                   |
| Unique repositories | 16,447              | 59,852                    |
| Repository owners   | 12,515              | 43,517                    |
| Compressed size     | 732 MB              | 2.4 GB \*                 |
| Vocabulary (tokens) | 443,431             | 443,431 †                 |
| License coverage    | Permissive only     | Mixed (perm. + copyleft)  |
| Secrets redacted    | ✅                  | ⚠️ not guaranteed         |
| Time window         | ≥ 2015-01-01        | ≥ 2015-01-01              |

\* Estimated – the elaborated corpus is distributed as a raw file list, not a
single text file.
† The same tokenizer file is shared by both variants.

Numbers were obtained from the final redacted corpus and companion metadata.

---

## Dataset structure

```
huggingface_dataset/
├─ mega_licensed_corpus_redacted.txt      # Licensed Subset – concatenated code
├─ python_files.txt                       # Licensed Subset – raw file URLs
├─ python_files_elaborated.txt            # Elaborated Collection – raw file URLs
├─ python_files_elaborated_metadata.csv   # Elaborated Collection metadata
└─ custom_tokens_vocab.txt                # `<token>\t<id>` vocabulary file
```

### File separator

Individual files are concatenated with the sentinel line:

```
# <FILESEP>
```

Anything following one sentinel until the next sentinel (or EOF) is the source
code of one file.

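The sentinel makes it straightforward to recover individual files from the
concatenated corpus. A minimal sketch (the `split_corpus` helper and the inline
sample corpus are illustrative, not part of the dataset tooling):

```python
# Split a concatenated corpus back into individual source files.
# The "# <FILESEP>" sentinel appears on its own line between files.
SENTINEL = "# <FILESEP>"

def split_corpus(text: str) -> list[str]:
    """Return one string per source file, with sentinel lines removed."""
    parts = [p.strip("\n") for p in text.split(SENTINEL)]
    # Drop empty chunks produced by leading/trailing sentinels.
    return [p for p in parts if p.strip()]

corpus = "# <FILESEP>\nprint('a')\n# <FILESEP>\nprint('b')\n"
files = split_corpus(corpus)  # two single-line source files
```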
---

## Dataset variants

### 1. Licensed Subset (`mega_licensed_corpus_redacted.txt`)

- 53 K permissively licensed files (MIT/BSD/Apache/ISC/Unlicense).
- All API keys & credentials removed.
- Ready for redistribution & commercial use (respect upstream NOTICE files).

### 2. Elaborated Collection (`python_files_elaborated.txt`)

- 186 K files from a much larger crawl.
- Contains **GPL / LGPL / AGPL and other copyleft** licenses.
- Shipped _as a URL list_ + metadata CSV; you must download the files yourself
  (`datasets.load_dataset` streaming, `wget`, etc.).
- **No license filtering or secret redaction performed** – use with caution.

Before loading the dataset, decide which variant aligns with your use case
(e.g. proprietary model training → Licensed Subset only).

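Because the Elaborated Collection ships only as a URL list, you need a small
download loop. A sketch assuming the list holds one raw file URL per line (the
`local_path_for` helper and the `elaborated/` directory layout are hypothetical):

```python
from pathlib import Path
from urllib.parse import urlparse

def local_path_for(url: str, root: str = "elaborated") -> Path:
    """Map a raw file URL to a local download path, mirroring its URL path,
    e.g. https://raw.githubusercontent.com/owner/repo/main/pkg/mod.py
    ->   elaborated/owner/repo/main/pkg/mod.py"""
    parsed = urlparse(url)
    return Path(root, *parsed.path.strip("/").split("/"))

# Hypothetical download loop (requires network access):
# import urllib.request
# for line in open("python_files_elaborated.txt"):
#     url = line.strip()
#     dest = local_path_for(url)
#     dest.parent.mkdir(parents=True, exist_ok=True)
#     urllib.request.urlretrieve(url, dest)
```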
---

## Collection methodology

1. **Repository discovery**

   - Queried the GitHub REST API for projects with **≥ 10 stars**
     (earlier iterations used 100+, later expanded for coverage).
   - Only repositories whose primary language is _Python_ and whose last
     commit is ≥ 2015.

2. **File filtering**

   - Retain files whose **size ∈ [1 KB, 100 KB]**.
   - Exclude common build/packaging scripts (`setup.py`, `__init__.py`, etc.).

3. **License compliance**

   - Allowed: MIT, Apache-2.0, BSD-2/3-Clause, ISC, Unlicense.
   - GPL, LGPL, AGPL and proprietary licenses were **excluded**.

4. **Deduplication**

   - Unique file SHA hashes; duplicates skipped.

5. **Formatting & cleaning**

   - Formatted with _autopep8_ to normalise whitespace.
   - A custom script removed trailing whitespace & normalised newlines.

6. **Secret redaction**

   - `truffleHog` + a custom regex pass removed >150 active credentials.
   - The redacted corpus is stored as `mega_licensed_corpus_redacted.txt`.

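The size window, filename exclusions and SHA deduplication of steps 2 and 4
can be sketched as follows (the `keep_file` helper, the SHA-256 choice, and
the two-entry exclusion set are assumptions; the real pipeline may differ):

```python
import hashlib

# Assumption: the actual exclusion list is longer than these two names.
EXCLUDED_NAMES = {"setup.py", "__init__.py"}

def keep_file(name: str, content: bytes, seen_hashes: set[str]) -> bool:
    """Apply the filename exclusions, 1 KB-100 KB size window,
    and content-hash deduplication described above."""
    if name in EXCLUDED_NAMES:
        return False
    if not (1_024 <= len(content) <= 100 * 1_024):
        return False
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen_hashes:
        return False  # exact duplicate of a file already kept
    seen_hashes.add(digest)
    return True
```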
---

## Custom tokenisation

The accompanying `custom_tokens_vocab.txt` implements a **Python-aware
sub-token scheme**:

1. Strip doc-strings & comments.
2. Split on:
   - camel-case boundaries (`CamelCase` → `Camel`, `Case`)
   - underscores and spaces
   - indentation & newlines (preserved as a `<newline>` token)
3. Rare tokens (frequency < 10) were dropped → 443 k vocabulary.

Example:

```python
def helloWorld(value):
    return value + 1
```

tokenises to:

```
def hello world ( value ) <newline> return value + 1 <newline>
```

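The camel-case and underscore splits behind the example above can be sketched
as follows (the `subtokenize` helper is illustrative only; the actual tokenizer
additionally handles indentation, newlines and doc-string stripping):

```python
import re

def subtokenize(identifier: str) -> list[str]:
    """Split an identifier on underscores and camel-case boundaries,
    lower-casing the pieces as in the helloWorld example."""
    parts = []
    for chunk in identifier.split("_"):
        # Break before each uppercase letter that follows a lowercase/digit.
        spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", chunk)
        parts.extend(p.lower() for p in spaced.split())
    return parts
```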
---

## Usage

```python
from datasets import load_dataset

ds = load_dataset("jblitzar/github-python", split="train")

print(ds[0]["code"][:300])  # raw source code
```

If you prefer token-level examples (e.g. to reduce memory use), map the
tokenizer over the dataset:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("custom_tokens_vocab.txt")

def encode(ex):
    ex["input_ids"] = tok.encode(ex["code"]).ids
    return ex

ds = ds.map(encode, remove_columns=["code"])
```

---

## Ethical considerations & limitations

- **Licenses respected** – only permissive licenses are included; retain NOTICE
  files when redistributing derivative works.
- **Secrets removed** – automated & manual audits were performed, yet users
  **must not assume zero secrets**; re-audit before public deployments.
- **Code quality** – projects vary in style & correctness; models trained on
  this corpus may replicate bugs or vulnerable patterns.

---

## Citation

If you use this dataset, please cite:

```
@misc{github-python-2024,
  author       = {JBlitzar},
  title        = {GitHub-Python: A Permissively Licensed Corpus of Python Code},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/jblitzar/github-python}},
  note         = {Version 1.0}
}
```

---

## License

Dataset card and aggregation scripts: **GPLv3**.
Each code snippet remains under its **original repository license** (MIT,
Apache-2.0, BSD, ISC, etc.). Users must comply with upstream notices when
redistributing code or derivatives.
 
---

The one line added by this commit replaces the card body above with a pointer:

https://huggingface.co/datasets/jblitzar/github-python/blob/main/README.md