LuKrO committed on
Commit 4c9c5b7 · 1 Parent(s): a3323ee

Working on readme

Files changed (1): README.md (+47 −46)

README.md CHANGED
tags:
- code
- source code
- code readability
- Java
pretty_name: Java Code Readability Combined Dataset
size_categories:
- n<1K
features:
- name: code_snippet
  dtype: string
- name: score
  dtype: float
---
# Java Code Readability Combined Dataset

This dataset contains **421 Java code snippets** along with a **readability score**.

You can download the dataset using Hugging Face:

```python
from datasets import load_dataset

ds = load_dataset("LuKrO/code-readbility-combined")
```

Each entry is structured as follows:

```python
{
    "code_snippet": ...,  # Java source code snippet
    "score": ...          # Readability score between 1.0 and 5.0
}
```

The main goal of this repository is to train **code readability classifiers** for Java source code.
The dataset is a combination and normalization of three datasets:

- **Buse**, Raymond PL, and Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on Software Engineering 36.4 (2009): 546-558.
- **Dorn**, Jonathan. "A General Software Readability Model." (2012).
- **Scalabrino**, Simone, et al. "Automatically assessing code understandability: How far are we?" 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017.

The raw datasets can be downloaded [here](https://dibt.unimol.it/report/readability/).
## Dataset Details

### Dataset Description

- **Curated by:** Raymond PL Buse, Jonathan Dorn, Simone Scalabrino
- **Shared by:** Lukas Krodinger
- **Language(s) (NLP):** Java
- **License:** Unknown
## Uses

The dataset can be used for training Java code readability classifiers.
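As a rough illustration of such a classifier, the sketch below builds a trivial baseline that uses mean line length as its only feature. The snippets, scores, threshold rule, and the `predict_readable` helper are all hypothetical illustrations, not part of this dataset or any published model:

```python
from statistics import mean

def avg_line_length(code: str) -> float:
    """One naive readability feature: mean length of non-empty lines."""
    lines = [line for line in code.splitlines() if line.strip()]
    return mean(len(line) for line in lines) if lines else 0.0

# Hypothetical labeled examples (score: 1.0 = very unreadable, 5.0 = very readable).
train = [
    ("int a;\nint b;", 4.5),
    ("public void x(){int a=f(1,2,3)+f(4,5,6)*f(7,8,9);}", 1.5),
]

# Trivial baseline: snippets with shorter lines than the training mean
# count as readable. A real classifier would use many more features.
threshold = mean(avg_line_length(code) for code, _ in train)

def predict_readable(code: str) -> bool:
    return avg_line_length(code) < threshold

print(predict_readable("int x = 0;"))  # True
```

Real readability models from the source papers combine dozens of structural and textual features; this sketch only shows the overall shape of the task.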
## Dataset Structure

Each entry of the dataset consists of a **code_snippet** and a **score**.
The code_snippet (string) is the code snippet that was rated in a study by multiple participants.
Participants rated each snippet on a five-point Likert scale, with 1 being very unreadable and 5 being very readable.
The score (float) is the rating averaged over all participants, ranging from 1.0 (very unreadable) to 5.0 (very readable).

The snippets are **not** split into train and test (and validation) sets.
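Since no official split ships with the dataset, users have to create their own. A minimal, library-independent sketch (the entries below are placeholders standing in for the real 421 rows):

```python
import random

# Placeholder rows mirroring the card's schema (code_snippet, score).
entries = [{"code_snippet": f"class C{i} {{}}", "score": 1.0 + i % 5}
           for i in range(421)]

# Shuffle with a fixed seed for reproducibility, then hold out 20% as a test set.
random.seed(42)
random.shuffle(entries)
cut = int(0.8 * len(entries))
train, test = entries[:cut], entries[cut:]

print(len(train), len(test))  # 336 85
```

With the Hugging Face `datasets` library, `ds["train"].train_test_split(test_size=0.2, seed=42)` achieves the same thing directly on the loaded dataset.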
## Dataset Creation

### Curation Rationale

To advance code readability classification, the creation of datasets in this research field is of high importance.
As a first step, we provide a combined and normalized version of existing datasets on Hugging Face.
This makes the existing data easier to access and use.

### Source Data

The data originate from the papers by Buse, Dorn and Scalabrino.

Buse conducted a survey with 120 computer science students (17 from first-year courses, 63 from second-year courses, 30 from third- or fourth-year courses, and 10 graduates) on 100 code snippets.
The code snippets were generated from five open source Java projects.

Dorn conducted a survey with 5000 participants (1800 with industry experience) on 360 code snippets, of which 121 are Java code snippets.
The snippets were drawn from ten open source projects in the SourceForge repository (as of March 15, 2012).

Scalabrino conducted a survey with 9 computer science students on 200 new code snippets.
The snippets were selected from four open source Java projects: jUnit, Hibernate, jFreeChart and ArgoUML.
#### Data Collection and Processing

The dataset was preprocessed by **averaging the readability ratings** for each code snippet.
The code snippets and ratings were then **combined** from the three sources.

Buse, Dorn and Scalabrino each selected their code snippets based on different criteria, and they had different numbers of participants in their surveys.
One could argue that a code snippet rated by more participants has a more accurate readability score and is therefore more valuable than one with fewer ratings.
However, for simplicity, those differences are ignored.

Other than the selection (and generation) done by the original data source authors, no further processing is applied to the data.
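The averaging step described above can be sketched as follows; the snippet names and ratings are illustrative placeholders, not actual survey data:

```python
# Hypothetical raw survey results: one Likert rating (1-5) per participant,
# per snippet. The real source datasets differ in participant counts.
raw_ratings = {
    "SnippetA.java": [4, 5, 4, 3],
    "SnippetB.java": [2, 1, 2],
}

# Normalize by averaging over all participants, yielding one score
# between 1.0 and 5.0 per snippet.
scores = {name: round(sum(r) / len(r), 2) for name, r in raw_ratings.items()}
print(scores)  # {'SnippetA.java': 4.0, 'SnippetB.java': 1.67}
```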

The ratings of the code snippets are anonymized and averaged. Thus, no personal information is included in the dataset.

## Bias, Risks, and Limitations

The size of the dataset is very **small**.
The ratings of the code snippets were done mostly by **computer science students**, who do not represent the group of Java programmers in general.

### Recommendations

The dataset should be used to train **small** Java code readability classifiers.
## Citation

**BibTeX:**

```bibtex
@article{buse2009learning,
  title={Learning a metric for code readability},
  author={Buse, Raymond PL and Weimer, Westley R.},
  journal={IEEE Transactions on Software Engineering},
  volume={36},
  number={4},
  pages={546--558},
  year={2009},
  publisher={IEEE}
}

@inproceedings{dorn2012general,
  title={A General Software Readability Model},
  author={Jonathan Dorn},
  year={2012},
  url={https://api.semanticscholar.org/CorpusID:14098740}
}

@inproceedings{scalabrino2016improving,
  title={Improving code readability models with textual features},
  author={Scalabrino, Simone and Linares-Vasquez, Mario and Poshyvanyk, Denys and Oliveto, Rocco},
  booktitle={2016 IEEE 24th International Conference on Program Comprehension (ICPC)},
  year={2016},
  organization={IEEE}
}
```
## Glossary

Readability: We define readability as a subjective impression of the difficulty of code while trying to understand it.

## Dataset Card Authors

Lukas Krodinger, [Chair of Software Engineering II](https://www.fim.uni-passau.de/en/chair-for-software-engineering-ii), [University of Passau](https://www.uni-passau.de/en/).

## Dataset Card Contact

Feel free to contact me via [E-Mail](mailto:[email protected]) if you have any questions or remarks.