---
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: file
    dtype: string
  - name: code
    dtype: string
  - name: file_length
    dtype: int64
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: extension_type
    dtype: string
  splits:
  - name: train
    num_bytes: 12984199778
    num_examples: 1415924
  download_size: 4073853616
  dataset_size: 12984199778
license: bigcode-openrail-m
task_categories:
- text-generation
language:
- en
pretty_name: arxiv_python_research_code
size_categories:
- 1B<n<10B
---
# Dataset Card for "ArtifactAI/arxiv_python_research_code"

## Dataset Description

https://huggingface.co/datasets/ArtifactAI/arxiv_python_research_code

### Dataset Summary

ArtifactAI/arxiv_python_research_code contains over 4.13 GB of Python source code files from repositories referenced in ArXiv papers. The dataset serves as a curated corpus for training code LLMs.

### How to use it

```python
from datasets import load_dataset

# full dataset (4.13 GB of data)
ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("ArtifactAI/arxiv_python_research_code", streaming=True, split="train")
for sample in iter(ds):
    print(sample["code"])
```
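
When exploring the dataset, streaming pairs well with `take()` to inspect a handful of records without materializing the full download. A minimal sketch using the standard `datasets` streaming API:

```python
from datasets import load_dataset

# Peek at the first few records; only the bytes actually needed are fetched.
ds = load_dataset("ArtifactAI/arxiv_python_research_code", streaming=True, split="train")
for sample in ds.take(3):
    print(sample["repo"], sample["file"], len(sample["code"]))
```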

## Dataset Structure

### Data Instances

Each data instance corresponds to one file. The content of the file is in the `code` feature, and the other features (`repo`, `file`, etc.) provide metadata about it.

### Data Fields

- `repo` (string): code repository name.
- `file` (string): file path within the repository.
- `code` (string): contents of the file.
- `file_length` (integer): number of characters in the file.
- `avg_line_length` (float): average line length of the file.
- `max_line_length` (integer): maximum line length of the file.
- `extension_type` (string): file extension.
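
These metadata fields are handy for corpus curation. As a minimal sketch (the thresholds are illustrative assumptions, not recommendations from the dataset authors), files with unusually long lines can be dropped with the standard `datasets` filtering API:

```python
from datasets import load_dataset

ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train")

# Drop files with unusually long lines; these are often minified or
# machine-generated rather than hand-written research code.
filtered = ds.filter(
    lambda x: x["max_line_length"] < 1000 and x["avg_line_length"] < 100
)
print(f"kept {filtered.num_rows:,} of {ds.num_rows:,} files")
```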

### Data Splits

The dataset has no separate validation or test splits; all data is loaded as the `train` split by default.
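
If a held-out set is needed, the standard `datasets` split-slicing syntax can carve one out at load time; a small sketch (the 99/1 ratio is an arbitrary assumption):

```python
from datasets import load_dataset

# Carve a small evaluation slice out of the single train split.
train_ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train[:99%]")
eval_ds = load_dataset("ArtifactAI/arxiv_python_research_code", split="train[99%:]")
```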

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

34,099 active GitHub repository names were extracted from [ArXiv](https://arxiv.org/) papers, from the archive's inception through July 21st, 2023, totaling 773 GB of compressed GitHub repositories.

These repositories were then filtered, and the contents of every `.py` file were extracted, yielding 1.4 million files.
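
The collection pipeline itself is not published; the following is a minimal sketch of the repository-extraction step, assuming repository names were matched with a simple pattern over raw paper text. The regex and normalization are illustrative, not ArtifactAI's actual tooling:

```python
import re

# Match github.com/<owner>/<repo> references in raw paper text.
GITHUB_RE = re.compile(r"github\.com/([\w.-]+)/([\w.-]+)", re.IGNORECASE)

def extract_repo_names(paper_text: str) -> set[str]:
    """Return normalized owner/name repository identifiers found in text."""
    repos = set()
    for owner, name in GITHUB_RE.findall(paper_text):
        repos.add(f"{owner}/{name.removesuffix('.git')}")
    return repos
```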

#### Who are the source language producers?

The source (code) language producers are the GitHub users who created the referenced repositories.

### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that were previously published in public repositories on GitHub; a rough pre-use scan is sketched below.
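
As a rough sketch only, a downstream user could flag the most obvious secrets before training on samples. The patterns below are simplistic assumptions, not redaction tooling shipped with the dataset (none is documented):

```python
import re

# Simplistic patterns for a few common kinds of leaked secrets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def flag_sensitive(code: str) -> list[str]:
    """Return the names of secret patterns that match the given code."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(code)]
```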

## Additional Information

### Dataset Curators

Matthew Kenney, Artifact AI, [email protected]

### Citation Information

```
@misc{arxiv_python_research_code,
    title={arxiv_python_research_code},
    author={Matthew Kenney},
    year={2023}
}
```