---
dataset_info:
  features:
    - name: question_id
      dtype: int64
    - name: title
      dtype: string
    - name: question_body
      dtype: string
    - name: question_type
      dtype: string
    - name: question_date
      dtype: string
  splits:
    - name: train
      num_bytes: 3433758
      num_examples: 3449
    - name: test
      num_bytes: 12055
      num_examples: 14
  download_size: 0
  dataset_size: 3445813
license: cc
task_categories:
  - text-classification
language:
  - en
tags:
  - code
pretty_name: staqt
size_categories:
  - 1K<n<10K
---
# Dataset Card for "stackoverflow_question_types"
## NOTE: this dataset is still under annotation
## Dataset Description
Recent research has looked into leveraging data available from Stack Overflow (SO) to train large language models for programming-related tasks.
However, users ask a wide range of questions on Stack Overflow. The "stackoverflow question types" dataset contains manually annotated questions
posted on SO, each with an associated type. Following a previous [study](https://ieeexplore.ieee.org/document/6405249), each question was annotated with a type
capturing the main concern of the user who posted it. The questions were annotated with the following types:
* *Need to know*: Questions regarding the possibility or availability of (doing) something. These questions normally show the lack of knowledge or uncertainty about some aspects of the technology (e.g. the presence of a feature in an API or a language).
* *How to do it*: Providing a scenario and asking how to implement it (sometimes with a given technology or API).
* *Debug/corrective*: Dealing with problems in the code under development, such as runtime errors and unexpected behaviour.
* *Seeking different solutions*: The questioner has a working code yet seeks a different approach to doing the job.
* *Conceptual*: The question seeks to understand some aspect of programming (with or without code examples).
* *Other*: A question related to another aspect of programming, or not related to programming at all.
### Remarks
For this dataset, we are mainly interested in questions related to *programming*.
For instance, in [this question](https://stackoverflow.com/questions/51142399/no-acceptable-c-compiler-found-in-path-installing-python-and-gcc),
the user is "trying to install Python-3.6.5 on a machine that does not have any package manager installed" and is facing issues.
Because the problem is not about programming itself, we would classify the question as "other" rather than "debug".
Moreover, we note the following conceptual distinctions between the different categories:
- Need to know: the user asks "is it possible to do x?".
- How to do it: the user wants to do "x", knows it is possible, but has no clear idea or solution, and seeks any way of solving "x".
- Debug: the user wants to do "x", has a clear idea/solution "y" that is not working, and seeks a correction to "y".
- Seeking different solutions: the user wants to do "x", already has a working solution "y", but seeks an alternative "z".
Sometimes it is hard to know a user's true intention;
the line separating two categories can be thin and subject to interpretation.
Naturally, some questions have multiple concerns (i.e., could correspond to multiple categories).
However, this dataset mainly contains questions to which we could assign a single, clear category.
Currently, all annotated questions are a subset of the [stackoverflow_python](https://huggingface.co/datasets/koutch/stackoverflow_python) dataset.
### Languages
The currently annotated questions concern posts with the *python* tag. The questions are written in *English*.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- question_id: the unique id of the post
- title: the title of the question
- question_body: the (HTML) content of the question
- question_date: the date the question was posted
- question_type: the assigned category/type/label, one of:
  - "needtoknow"
  - "howto"
  - "debug"
  - "seeking"
  - "conceptual"
  - "other"
### Data Splits
[More Information Needed]
## Dataset Creation
### Annotations
#### Annotation process
Previous research looked into mining natural language-code pairs from Stack Overflow.
Two notable works yielded the [StaQC](https://arxiv.org/abs/1803.09371) and [CoNaLa](https://arxiv.org/abs/1805.08949) datasets.
Part of this dataset reuses a subset of the manual annotations provided by the authors of those papers (available at [staqc](https://huggingface.co/datasets/koutch/staqc)
and [conala](https://huggingface.co/datasets/neulab/conala)); those questions were annotated as belonging to the "how to do it" category.
To ease the annotation procedure, we used the [Argilla platform](https://docs.argilla.io/en/latest/index.html)
and multiple iterations of [few-shot training with a SetFit model](https://docs.argilla.io/en/latest/tutorials/notebooks/labelling-textclassification-setfit-zeroshot.html#%F0%9F%A6%BE-Train-a-few-shot-SetFit-model).
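To give a flavour of the few-shot labelling loop described above, here is a dependency-free sketch. It uses nearest-centroid classification over bag-of-words vectors as a stand-in for the sentence-transformer embeddings SetFit would use; the seed titles and labels below are hypothetical, and this is not the actual annotation pipeline.

```python
from collections import Counter


def bow(text: str) -> Counter:
    """Lower-cased bag-of-words vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def centroids(seeds: dict) -> dict:
    """Sum the vectors of the labelled seed examples per class."""
    cents = {}
    for label, texts in seeds.items():
        c = Counter()
        for t in texts:
            c.update(bow(t))
        cents[label] = c
    return cents


def predict(title: str, cents: dict) -> str:
    """Assign the label whose centroid is closest to the title."""
    return max(cents, key=lambda lab: cosine(bow(title), cents[lab]))


# A few hand-labelled seed examples (hypothetical).
seeds = {
    "howto": ["how to sort a dict by value", "how to read a csv file"],
    "debug": ["why does my loop raise IndexError", "error when importing module"],
}
cents = centroids(seeds)
print(predict("how to reverse a string", cents))  # "howto"
```

In the actual process, predictions like these were reviewed in Argilla, corrections were fed back as new seed examples, and the model was retrained, iterating until the labels stabilised.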
## Considerations for Using the Data
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]