Dataset Card for GitHub Issues

Dataset Summary

The motivation to create this dataset comes from the last two sections of chapter 5 of the Hugging Face NLP course, which build a GitHub issues dataset from the Hugging Face repository named datasets. Although those sections give the steps to create the dataset for semantic search, when I followed the code and instructions I ran into errors that the course does not mention: null values appear in some fields/issues where Python was not expecting them, and some timestamp features (created_at, timeline_url, and so on) failed to parse. So instead of using the load_dataset() function, I worked directly on the JSON files, and used, filtered, and selected only the features needed to create the semantic search application, namely html_url, number, body, comments, and title.

The steps I followed were:

1. Filter out pull requests, keeping only real issues, that is, only those lines in the original JSON Lines file where pull_request is null.
2. Select the features listed above from each line, ignoring all other columns that are not useful for the semantic search application. This filtering and selection was done with my own code and logic rather than load_dataset(), to avoid the timestamp and JSON-to-Python errors I had been encountering earlier.
3. Save the result on the local machine, then load it with the load_dataset() function in the "json" format to make it a Hugging Face dataset.
4. Use the map() function to extract the comment texts (each comment's body): the existing comments feature, which was previously dtype int64 and gave the number of comments on an issue, is replaced with the list of comment strings written by users.
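A minimal sketch of steps 1-3 follows; the file names here are hypothetical, and this is one way to do the filtering, not necessarily the exact script used:

```python
import json

from datasets import load_dataset

# Features kept for the semantic search application.
KEEP = ["html_url", "title", "comments", "body", "number"]

# Steps 1 and 2: keep true issues (pull_request is null) and only the
# columns above, writing the slimmed records to a new JSON Lines file.
with open("datasets-issues.jsonl", encoding="utf-8") as src, open(
    "issues-filtered.jsonl", "w", encoding="utf-8"
) as dst:
    for line in src:
        record = json.loads(line)
        if record.get("pull_request") is None:  # drop pull requests
            slim = {key: record.get(key) for key in KEEP}
            dst.write(json.dumps(slim) + "\n")

# Step 3: the filtered file contains only simple string/int fields, so
# loading it as a Hugging Face dataset no longer trips over unexpected
# nulls or timestamp parsing.
issues_dataset = load_dataset("json", data_files="issues-filtered.jsonl", split="train")
```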

This dataset can be used to create an asymmetric semantic search application, meaning one that matches a short query against longer paragraphs that answer it, to address user queries about issues in the Hugging Face datasets repository. The details of how this is done are covered in the last section (FAISS index) of chapter 5 of the course.
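As a rough sketch of what that looks like in code, following the FAISS approach from the course: the model choice and the decision to embed only the body column are assumptions (the course embeds a concatenation of title, body, and comments), and the file name is the hypothetical one from the summary above.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# A model trained for asymmetric search: short queries against long passages.
model = SentenceTransformer("multi-qa-mpnet-base-dot-v1")

issues = load_dataset("json", data_files="issues-filtered.jsonl", split="train")
issues = issues.filter(lambda x: x["body"] is not None)  # some bodies are null

# Embed each issue body and index the embeddings with FAISS.
issues = issues.map(
    lambda batch: {"embeddings": model.encode(batch["body"]).tolist()},
    batched=True,
)
issues.add_faiss_index(column="embeddings")

# A short query retrieves the longer issues that best answer it.
query_embedding = model.encode("How can I load a dataset offline?")
scores, samples = issues.get_nearest_examples("embeddings", query_embedding, k=5)
for score, title, url in zip(scores, samples["title"], samples["html_url"]):
    print(f"{score:.2f}  {title}  ({url})")
```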

Languages
The dataset is in English: all titles, bodies, and comments were collected in English from issues on the Hugging Face datasets repository.

Dataset Structure

This is the structure as of now; all rows are issues, not pull requests.

Dataset({
    features: ['html_url', 'title', 'comments', 'body', 'number'],
    num_rows: 2893
})
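For example, loading the dataset reproduces the structure above; the Hub repository ID below is a guess based on the author's username, so substitute the actual one:

```python
from datasets import load_dataset

# Repository ID is assumed; adjust to the actual Hub path.
issues = load_dataset("amannagrawall002/github-issues", split="train")
print(issues)
# Dataset({
#     features: ['html_url', 'title', 'comments', 'body', 'number'],
#     num_rows: 2893
# })
```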



Data Instances
{"html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/7079","title":"HfHubHTTPError: 500 Server Error: Internal Server Error for url:","comments":["same issue here.

@albertvillanova


@lhoestq
....................................... list of all comments csv ],"body":"### Describe the bug\n\nnewly uploaded datasets, since yesterday, yields an error.\r\n\r\.............. body describing the issue```","number":7079}

Data Fields

The fields are straightforward, and their meaning should be clear after the summary above; the dtype of each feature is given below:

features={
    'html_url': Value(dtype='string', id=None),
    'title': Value(dtype='string', id=None),
    'comments': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
    'body': Value(dtype='string', id=None),
    'number': Value(dtype='int64', id=None),
}
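If you rebuild the dataset yourself, declaring these features explicitly when loading pins the schema instead of relying on type inference. A sketch, reusing a hypothetical file name for the final file in which comments are already lists of strings:

```python
from datasets import Features, Sequence, Value, load_dataset

features = Features(
    {
        "html_url": Value("string"),
        "title": Value("string"),
        "comments": Sequence(Value("string")),
        "body": Value("string"),
        "number": Value("int64"),
    }
)
issues = load_dataset(
    "json", data_files="issues-with-comments.jsonl", features=features, split="train"
)
```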


Source Data
The dataset was made from scratch using the GitHub REST API, drawing on the open-source issues of the Hugging Face datasets repository.

Initial Data Collection and Normalization

Making this dataset took a lot of involvement and error handling. The repository had more than 5,000 issues, which exceeds what the GitHub REST API allows in a single hour (5,000 authenticated requests per hour), and extracting the comments for each issue also required machine work, so both machine and human involvement were needed. The dataset was collected with the aim of creating an asymmetric semantic search application for the Hugging Face datasets repository.
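A sketch of the machine side of the collection, assuming a personal access token in the GITHUB_TOKEN environment variable (authenticated requests get the 5,000-per-hour limit mentioned above); the function names are illustrative:

```python
import os

import requests

headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
REPO = "https://api.github.com/repos/huggingface/datasets"

def fetch_issues_page(page, per_page=100):
    """Fetch one page of issues (the endpoint also returns pull requests)."""
    response = requests.get(
        f"{REPO}/issues",
        headers=headers,
        params={"state": "all", "page": page, "per_page": per_page},
    )
    response.raise_for_status()
    return response.json()

def get_comments(issue_number):
    """Return the list of comment strings (each comment's body) for one issue."""
    response = requests.get(f"{REPO}/issues/{issue_number}/comments", headers=headers)
    response.raise_for_status()
    return [comment["body"] for comment in response.json()]

# Replacing the int64 comment count with the actual comment strings:
# issues = issues.map(lambda x: {"comments": get_comments(x["number"])})
```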

Anyone seriously working through the Hugging Face NLP course will have tried to create this dataset, and as the course mentions, a similar dataset can easily be accessed remotely. What I did differently was to build the dataset purely for the semantic search application, discarding the other columns. If multiple purposes have to be served, this dataset may fall short: for example, it no longer carries pull request information, so any task involving pull requests cannot be done with it.

Aman Agrawal

email - [email protected]
