---
dataset_info:
  features:
  - name: html_url
    dtype: string
  - name: title
    dtype: string
  - name: comments
    sequence: string
  - name: body
    dtype: string
  - name: number
    dtype: int64
  splits:
  - name: train
    num_bytes: 10108947
    num_examples: 2893
  download_size: 4360781
  dataset_size: 10108947
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
size_categories:
- 1K<n<10K
---

# GitHub Issues Dataset for Semantic Search

## Dataset Summary

This dataset was created by following the last two sections of Chapter 5 of the Hugging Face NLP course, which walk through building a corpus for a semantic search system from the GitHub issues of the Hugging Face `datasets` repository. What sets this version apart is that it is already filtered and trimmed: pull requests are excluded, and only the fields needed for a search engine are kept, so it can be used directly to build a semantic search engine for resolving user queries about the `datasets` repository. During creation, several challenges had to be handled, including null values and timestamp-related errors. The retained fields are those relevant to semantic search: `html_url`, `title`, `body`, `comments`, and the issue `number`.
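The null-value cleanup itself is not shown in this card; as a rough sketch of the kind of handling involved (the function name and field choices are illustrative assumptions, not the exact code used):

```python
# Hypothetical cleanup step (not the exact code used to build this
# dataset): replace missing text fields with empty strings so that
# downstream tokenization and embedding do not fail on None.
def clean_nulls(example):
    for field in ("title", "body"):
        if example.get(field) is None:
            example[field] = ""
    # Issues without comments get an empty list instead of None.
    if example.get("comments") is None:
        example["comments"] = []
    return example

cleaned = clean_nulls({"title": None, "body": "some text", "comments": None})
print(cleaned)  # {'title': '', 'body': 'some text', 'comments': []}
```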

### Purpose

The dataset supports the development of asymmetric semantic search applications, in which short queries are matched against longer passages that answer them, specifically issues relating to the Hugging Face `datasets` repository.
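To make "asymmetric search" concrete, here is a minimal, self-contained sketch; the vectors are made-up placeholders standing in for the output of a real sentence-embedding model (which this card does not specify):

```python
import math

# Toy cosine-similarity search: a short query vector is ranked against
# longer-passage vectors. In practice the vectors would come from a
# sentence-embedding model; the numbers below are purely illustrative.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def search(query_vec, passage_vecs):
    """Return passage indices ranked by similarity to the query, best first."""
    scores = [cosine(query_vec, p) for p in passage_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

query = [0.9, 0.1, 0.0]            # embedding of a short user query
passages = [
    [0.1, 0.9, 0.0],               # off-topic issue text
    [0.8, 0.2, 0.1],               # relevant issue text
]
print(search(query, passages))     # → [1, 0]
```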

## Dataset Info

- **Configuration Name**: default
- **Splits**:
  - train
- **Number of Bytes**: 10108947
- **Number of Examples**: 2893
- **Download Size**: 4360781 bytes
- **Total Dataset Size**: 10108947 bytes

## Languages

The dataset is entirely in English, including all issue titles, bodies, and comments drawn from the Hugging Face `datasets` repository.

## Dataset Structure

The dataset contains a single `train` split of issues collected so far, excluding pull requests:

```
Dataset({
    features: ['html_url', 'title', 'comments', 'body', 'number'],
    num_rows: 2893
})
```
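The exact filtering code is not part of this card, but in the GitHub REST API an issue that is really a pull request carries a `pull_request` key, so PRs can be dropped along these lines (a plain-Python sketch; with the 🤗 `datasets` library one would use `Dataset.filter` instead):

```python
# Sketch of PR exclusion and field selection (assumed, not the exact
# script used): keep only true issues and only this dataset's fields.
KEEP = ("html_url", "title", "comments", "body", "number")

def strip_pull_requests(raw_issues):
    """Drop entries that are pull requests; keep only the schema fields."""
    return [
        {k: issue.get(k) for k in KEEP}
        for issue in raw_issues
        if issue.get("pull_request") is None   # PRs carry this key
    ]

raw = [
    {"html_url": "u1", "title": "bug", "comments": [], "body": "b", "number": 1},
    {"html_url": "u2", "title": "pr", "comments": [], "body": "b",
     "number": 2, "pull_request": {"url": "..."}},
]
print(len(strip_pull_requests(raw)))  # → 1 (the PR is dropped)
```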

## Data Instances

An example data instance looks like this:

```json
{
    "html_url": "https://github.com/huggingface/datasets/issues/7079",
    "title": "HfHubHTTPError: 500 Server Error: Internal Server Error for url:",
    "comments": ["same issue here. ... list of all comments csv"],
    "body": "### Describe the bug\n\nnewly uploaded datasets, since yesterday, yields an error.\r\n\r\n...body describing the issue",
    "number": 7079
}
```
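For search, the comments are often what actually answer a query, so a common preprocessing step (an assumption here, not something this card prescribes) is to explode `comments` into one row per comment, concatenated with the issue title and body:

```python
# Sketch: turn one issue into one row per comment, pairing each comment
# with the issue's title and body as the text to embed. The helper name
# and the joining scheme are illustrative assumptions.
def explode_comments(example):
    rows = []
    for comment in example["comments"]:
        rows.append({
            "html_url": example["html_url"],
            "number": example["number"],
            "text": example["title"] + " \n " + example["body"] + " \n " + comment,
        })
    return rows

issue = {
    "html_url": "https://github.com/huggingface/datasets/issues/7079",
    "number": 7079,
    "title": "HfHubHTTPError: 500 Server Error",
    "body": "newly uploaded datasets yield an error",
    "comments": ["same issue here", "fixed on main"],
}
print(len(explode_comments(issue)))  # → 2, one row per comment
```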

## Data Fields

The dataset includes the following fields:

- `html_url`: URL of the GitHub issue (string).
- `title`: Title of the issue (string).
- `comments`: Comments made on the issue (list of strings).
- `body`: Detailed description of the issue (string).
- `number`: GitHub issue number (int64).

In an environment with the 🤗 `datasets` library installed, the data can be loaded with two lines of code:

```python
from datasets import load_dataset

ds = load_dataset("amannagrawall002/github-issues")
```

## Source Data

The dataset is crafted from scratch using the GitHub REST API, focusing on open issues from the Hugging Face `datasets` repository.

## Initial Data Collection and Normalization

Creating the dataset involved fetching over 5000 issues, which runs up against the GitHub REST API rate limit of 5000 requests per hour. Extracting the comments for every issue also took significant time and compute, so the collection combined automated requests with manual oversight.
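A sketch of how pagination can be paced against that rate limit (the function names and the simple sleep strategy are illustrative assumptions, not the exact script used; the page fetcher is injected so the pacing logic can be tested without touching GitHub):

```python
import time

# Fetch issues page by page, sleeping out the hourly window whenever the
# request quota is spent. `fetch_page(page, per_page)` is any callable
# that returns one page of issues.
def fetch_all_issues(fetch_page, num_issues, per_page=100,
                     rate_limit=5000, pause=3600, sleep=time.sleep):
    issues, requests_made = [], 0
    num_pages = (num_issues + per_page - 1) // per_page  # ceil division
    for page in range(1, num_pages + 1):
        if requests_made and requests_made % rate_limit == 0:
            sleep(pause)  # wait for the rate-limit window to reset
        issues.extend(fetch_page(page, per_page))
        requests_made += 1
    return issues

# Fake fetcher for demonstration: pretends every page is full.
fake = lambda page, per_page: [{"number": i} for i in range(per_page)]
print(len(fetch_all_issues(fake, num_issues=250, per_page=100)))  # → 300
```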

## Considerations

This dataset is tailored for creating a semantic search application centered around GitHub issues. It does not contain data on pull requests, which may limit its applicability for tasks requiring such information.

### Additional Notes

Anyone working through the Hugging Face NLP course could attempt to recreate this dataset. While the raw data is accessible remotely as described in the course, this version focuses solely on supporting semantic search applications; other uses may require a dataset with broader field coverage.

## Author: Aman Agrawal

## Contact: [email protected]