---
dataset_info:
  features:
  - name: question_id
    dtype: int64
  - name: title
    dtype: string
  - name: question_body
    dtype: string
  - name: question_type
    dtype: string
  splits:
  - name: train
    num_bytes: 2955868
    num_examples: 3100
  - name: test
    num_bytes: 389238
    num_examples: 344
  download_size: 1792309
  dataset_size: 3345106
---
# Dataset Card for "stackoverflow_question_types"

## Dataset Description

Recent research has explored leveraging data available from Stack Overflow (SO) to train large language models for programming-related tasks.
However, users ask a wide range of questions on SO. The "stackoverflow question types" dataset consists of manually annotated questions
posted on SO, each associated with a type. Following a previous [study](https://ieeexplore.ieee.org/document/6405249), each question was annotated with the type
that best captures the main concern of the user who posted it. The questions were annotated with the following types:

* *How to do it*: Providing a scenario and asking how to implement it (sometimes with a given technology or API).
* *Debug/corrective*: Dealing with problems in the code under development, such as runtime errors and unexpected behaviour. 
* *Seeking different solutions*: The questioner has a working code yet is seeking a different approach to doing the job.
* *Need to know*: Questions regarding the possibility or availability of (doing) something. These questions normally show the lack of knowledge or uncertainty about some aspects of the technology (e.g. the presence of a feature in an API or a language).
* *Other*: Questions that do not fit any of the above categories.

We note the following distinction between the first three categories.

- How to do it: the user wants to do "x", has no clear idea of a solution or does not know how to do it, and wants any solution for solving "x".
- Debug: the user wants to do "x", has a clear idea or solution "y" that is not working, and is seeking a correction to "y".
- Seeking-different-solution: the user wants to do "x", has already found a working solution "y", but is seeking an alternative "z".

Naturally, some questions may have multiple concerns (i.e., could correspond to multiple categories).
However, this dataset mainly contains questions to which a single clear category could be assigned.
Currently, all annotated questions are a subset of the [stackoverflow_python](https://huggingface.co/datasets/koutch/stackoverflow_python) dataset.
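
A minimal sketch of loading the dataset with the `datasets` library is shown below. The repository id `koutch/stackoverflow_question_types` is an assumption based on the card name and the related datasets cited above; adjust it if the dataset lives elsewhere.

```python
# Minimal sketch, assuming the dataset is hosted under the (unconfirmed)
# repository id "koutch/stackoverflow_question_types".
from datasets import load_dataset

dataset = load_dataset("koutch/stackoverflow_question_types")

print(dataset)                     # splits: train (3,100 rows) and test (344 rows)
print(dataset["train"][0].keys())  # question_id, title, question_body, question_type
```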

### Languages

The currently annotated questions concern posts with the *python* tag. The questions are written in *English*.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- question_id: the unique id of the post
- title: the title of the question
- question_body: the (HTML) content of the question
- question_type: the assigned category/type/label, one of:
  - "howto"
  - "debug"
  - "seeking"
  - "conceptual"
  - "other"

### Data Splits

As listed in the metadata above, the dataset has a train split with 3,100 examples and a test split with 344 examples.


## Dataset Creation

### Annotations

#### Annotation process

Previous research looked into mining natural language-code pairs from Stack Overflow.
Two notable works yielded the [StaQC](https://arxiv.org/abs/1803.09371) and [CoNaLa](https://arxiv.org/abs/1805.08949) datasets.
Part of this dataset reuses a subset of the manual annotations provided by the authors of those papers (available at [staqc](https://huggingface.co/datasets/koutch/staqc)
and [conala](https://huggingface.co/datasets/neulab/conala)). Those questions were annotated as belonging to the "how to do it" category.

To ease the annotation procedure, we used the [Argilla platform](https://docs.argilla.io/en/latest/index.html)
and multiple iterations of [few-shot training with a SetFit model](https://docs.argilla.io/en/latest/tutorials/notebooks/labelling-textclassification-setfit-zeroshot.html#%F0%9F%A6%BE-Train-a-few-shot-SetFit-model).
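
For illustration, here is a minimal sketch of the kind of few-shot SetFit iteration described above. The backbone model, few-shot subset size, label encoding, and repository id are assumptions made for the example and do not reflect the exact configuration used during annotation.

```python
# Illustrative few-shot SetFit iteration (not the exact setup used for annotation).
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Assumed repository id; adjust to the actual location of the dataset.
ds = load_dataset("koutch/stackoverflow_question_types")

# Encode the string labels as integers for the classification head and metric.
labels = ["howto", "debug", "seeking", "conceptual", "other"]
label2id = {label: i for i, label in enumerate(labels)}
encode = lambda example: {"label": label2id[example["question_type"]]}

train_ds = ds["train"].map(encode).shuffle(seed=0).select(range(40))  # few-shot subset
test_ds = ds["test"].map(encode)

# A sentence-transformers backbone commonly used in SetFit examples.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=test_ds,
    column_mapping={"question_body": "text", "label": "label"},
)
trainer.train()
print(trainer.evaluate())  # accuracy on the held-out test split
```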

## Considerations for Using the Data

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]