Adding first readme version
README.md
CHANGED
@@ -1,3 +1,169 @@
---
license: unknown
task_categories:
- text-classification
language:
- en
tags:
- readability
- code
- source code
- code readability
- java
pretty_name: Java Code Readability Combined Dataset
size_categories:
- n<1K
---
# Java Code Readability Combined Dataset

This dataset contains 421 Java code snippets, each with a readability score. The snippets are not yet split into train, test, and validation sets.
The main goal of this repository is to enable training code readability classifiers for Java source code.
The dataset is a combination and normalization of three datasets:

- **Buse**, Raymond PL, and Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on Software Engineering 36.4 (2009): 546-558.
- **Dorn**, Jonathan. "A General Software Readability Model." (2012).
- **Scalabrino**, Simone, et al. "Automatically assessing code understandability: How far are we?." 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017.

The raw datasets can be downloaded [here](https://dibt.unimol.it/report/readability/).
The datasets were generated by asking Java programmers how readable they rate a given snippet.
Participants answered on a five-point Likert scale, with 1 being very unreadable and 5 being very readable.

We normalized the raw survey results by averaging, for each Java code snippet, the readability ratings over all participants.
This results in a readability rating between 1.0 and 5.0 per snippet.
The snippets and their averaged ratings from the three sources were then combined into the given dataset.

## Dataset Details

### Dataset Description

- **Curated by:** Buse Raymond PL, Dorn Jonathan, Scalabrino Simone
- **Shared by [optional]:** Krodinger Lukas
- **Language(s) (NLP):** Java
- **License:** Unknown

### Dataset Sources [optional]

- **Origin:** https://dibt.unimol.it/report/readability/
- **Paper:**
  - **Buse**, Raymond PL, and Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on Software Engineering 36.4 (2009): 546-558.
  - **Dorn**, Jonathan. "A General Software Readability Model." (2012).
  - **Scalabrino**, Simone, et al. "Improving code readability models with textual features." 2016 IEEE 24th International Conference on Program Comprehension (ICPC). IEEE, 2016.

## Uses

The dataset can be used for training Java code readability classifiers.

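As a rough illustration of such a use, the sketch below trains a simple readability regressor with scikit-learn. The repository id `user/java-code-readability` is a placeholder assumption, and the column names follow the dataset structure described in the next section.

```python
# Minimal sketch: train a simple readability regressor on the snippets.
# NOTE: "user/java-code-readability" is a placeholder repository id.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

ds = load_dataset("user/java-code-readability", split="train")

X_train, X_test, y_train, y_test = train_test_split(
    ds["code_snippet"], ds["score"], test_size=0.2, random_state=42
)

# Character n-grams retain identifier and punctuation signal in source code
# that word-level tokenization would lose.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
model = Ridge()
model.fit(vectorizer.fit_transform(X_train), y_train)

print("R^2 on held-out snippets:", model.score(vectorizer.transform(X_test), y_test))
```
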
## Dataset Structure

Each entry of the dataset consists of a **code_snippet** and a **score**.
The code_snippet (String) is the code snippet that was rated by multiple study participants.
The score (float) is the rating averaged over all participants, between 1.0 (very unreadable) and 5.0 (very readable).

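A minimal sketch for loading the data and inspecting one entry; the repository id is again a placeholder assumption:

```python
# Load the dataset and inspect one entry (placeholder repo id).
from datasets import load_dataset

ds = load_dataset("user/java-code-readability", split="train")

example = ds[0]
print(example["code_snippet"][:200])  # the rated Java snippet (String)
print(example["score"])               # averaged rating in [1.0, 5.0] (float)

# No train/test split is provided yet; one way to create one:
splits = ds.train_test_split(test_size=0.2, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```
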
## Dataset Creation

### Curation Rationale

To advance code readability classification, the creation of datasets in this research field is of high importance.
As a first step, we provide a combined and normalized version of existing datasets on a state-of-the-art platform.
This makes the existing data easier to access and use.

### Source Data

The sources of the data are the papers by Buse, Dorn, and Scalabrino.

Buse conducted a survey with 120 computer science students (17 from first-year courses, 63 from second-year courses, 30 from third- or fourth-year courses, 10 graduated) on 100 code snippets.
The code snippets were generated from five open source Java projects.

Dorn conducted a survey with 5000 participants (1800 with industry experience) on 360 code snippets, of which 121 are Java code snippets.
The snippets were drawn from ten open source projects in the SourceForge repository (as of March 15, 2012).

Scalabrino conducted a survey with 9 computer science students on 200 new code snippets.
The snippets were selected from four open source Java projects: jUnit, Hibernate, jFreeChart, and ArgoUML.

#### Data Collection and Processing

The dataset was preprocessed by averaging the readability ratings for each code snippet.
The code snippets and ratings were then combined from the three sources.

Each of the three authors (Buse, Dorn, and Scalabrino) selected their code snippets based on different criteria, and each survey had a different number of participants.
Note that those differences were ignored when combining the datasets.
For example, one could argue that a code snippet rated by more participants has a more accurate readability score and is therefore more valuable than one with fewer ratings; for simplicity, however, such differences are ignored.

Other than the selection (and generation) done by the original data source authors, no further processing is applied to the data.

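For illustration, the averaging and combining steps might look like the sketch below; the file names and column layout are assumptions, not the authors' actual pipeline:

```python
# Hypothetical sketch of the preprocessing: average per-participant ratings
# per snippet, then concatenate the three sources.
# NOTE: file names and columns ("snippet_id", "code_snippet", "rating") are
# assumptions for illustration only.
import pandas as pd

frames = []
for source in ["buse.csv", "dorn.csv", "scalabrino.csv"]:
    raw = pd.read_csv(source)  # one row per (snippet, participant) rating
    avg = (
        raw.groupby(["snippet_id", "code_snippet"], as_index=False)["rating"]
        .mean()
        .rename(columns={"rating": "score"})
    )
    frames.append(avg[["code_snippet", "score"]])

# Differences in participant counts between sources are ignored, as noted above.
combined = pd.concat(frames, ignore_index=True)
```
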
#### Who are the source data producers?

The source data producers are the people who wrote the open source Java projects used, as well as the study participants, who were mostly computer science students.

#### Personal and Sensitive Information

The ratings of the code snippets are anonymized and averaged. Thus, no personal or sensitive information is contained in this dataset.

## Bias, Risks, and Limitations

The size of the dataset is very small.
The code snippets were rated mostly by computer science students, who are not representative of Java programmers in general.

### Recommendations

The dataset should be used to train **small** Java code readability classifiers.

## Citation

**BibTeX:**

Buse:
```bibtex
@article{buse2009learning,
  title={Learning a metric for code readability},
  author={Buse, Raymond PL and Weimer, Westley R},
  journal={IEEE Transactions on Software Engineering},
  volume={36},
  number={4},
  pages={546--558},
  year={2009},
  publisher={IEEE}
}
```

Dorn:
```bibtex
@article{dorn2012general,
  title={A general software readability model},
  author={Dorn, Jonathan},
  journal={MCS Thesis available from http://www.cs.virginia.edu/weimer/students/dorn-mcs-paper.pdf},
  volume={5},
  pages={11--14},
  year={2012}
}
```

Scalabrino:
```bibtex
@inproceedings{scalabrino2016improving,
  title={Improving code readability models with textual features},
  author={Scalabrino, Simone and Linares-Vasquez, Mario and Poshyvanyk, Denys and Oliveto, Rocco},
  booktitle={2016 IEEE 24th International Conference on Program Comprehension (ICPC)},
  pages={1--10},
  year={2016},
  organization={IEEE}
}
```

**APA:**
- Buse, Raymond PL, and Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on Software Engineering 36.4 (2009): 546-558.
- Dorn, Jonathan. "A General Software Readability Model." (2012).
- Scalabrino, Simone, et al. "Automatically assessing code understandability: How far are we?." 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017.

## Glossary

**Readability:** We define readability as the subjective impression of the difficulty of code while trying to understand it.

## Dataset Card Authors

Lukas Krodinger, [Chair of Software Engineering II](https://www.fim.uni-passau.de/en/chair-for-software-engineering-ii), University of Passau.

## Dataset Card Contact

Feel free to contact me via [E-Mail](mailto:[email protected]) if you have any questions or remarks.