amannagrawall002 committed
Commit 8bcff5d · verified · 1 Parent(s): 2293d35

Readme file updated

Files changed (1)
  1. README.md +47 -1
README.md CHANGED
language:
- en
size_categories:
- 1M<n<10M
---

Dataset Card for GitHub Issues

Dataset Summary

The motivation for this dataset comes from the last two sections of Chapter 5 of the Hugging Face NLP course, which show how to build a dataset of GitHub issues from the huggingface/datasets repository. Following only the code and instructions from those sections, I ran into errors that the course does not mention: some fields contained null values where Python was not expecting them, and several timestamp fields (such as created_at and timeline_url) caused loading errors. So instead of going through the load_dataset() function, I worked directly on the raw JSON files and used, filtered, and selected only the features needed for the semantic search application: html_url, issue number, body, comments, and title.

The steps were: first, keep only the entries that are real issues rather than pull requests, i.e. the lines in the original JSON Lines file where "pull_request": null holds; then select the features listed above from each line, discarding every other column that is not useful for semantic search. This filtering and selection was done with my own code and logic rather than load_dataset(), which avoided the timestamp and JSON-to-Python errors I had been hitting earlier. Once the result was saved on the local machine, load_dataset() with the "json" format turned it into a Hugging Face dataset. Finally, the map() function was used to extract the comments, and the existing comments feature, which was previously an int64 giving the number of comments on an issue, was replaced with a list of the comment strings written by users.
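
As a rough sketch of those steps (the file names and the fetch_comments placeholder are illustrative, not the exact code used to build this dataset):

```python
import json

from datasets import load_dataset

# Fields kept for the semantic search application
KEEP = ["html_url", "title", "comments", "body", "number"]

# 1) Filter the raw JSON Lines dump: keep only real issues (where pull_request
#    is null) and only the fields listed above. File names are placeholders.
with open("datasets-issues.jsonl", encoding="utf-8") as src, \
     open("issues-filtered.jsonl", "w", encoding="utf-8") as dst:
    for line in src:
        issue = json.loads(line)
        if issue.get("pull_request") is None:  # null pull_request => a real issue
            dst.write(json.dumps({key: issue.get(key) for key in KEEP}) + "\n")

# 2) Turn the filtered file into a Hugging Face dataset with the "json" loader.
issues_dataset = load_dataset("json", data_files="issues-filtered.jsonl", split="train")

# 3) Replace the integer comment count with the list of comment strings.
def fetch_comments(issue_number):
    # Placeholder: return the comment strings for one issue, e.g. via the
    # GitHub REST API (a rate-limit-aware sketch appears further down).
    return []

issues_dataset = issues_dataset.map(
    lambda example: {"comments": fetch_comments(example["number"])}
)
```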

This dataset can be used to build an asymmetric semantic search application, meaning a short query is matched against longer paragraphs that answer it, to address user queries about issues in the huggingface/datasets repository. The details of how this is done are covered in the last section of Chapter 5 (on the FAISS index).
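
A minimal sketch of such a search, assuming a placeholder dataset id and a sentence-transformers model for the embeddings (the course builds its embeddings with AutoTokenizer/AutoModel, so treat this as an equivalent rather than the exact recipe):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Placeholder dataset id; the embedding model matches the one suggested in the course.
model = SentenceTransformer("multi-qa-mpnet-base-dot-v1")
issues = load_dataset("amannagrawall002/github-issues", split="train")

# Embed each issue (title + body), index the embeddings with FAISS,
# then retrieve the issues closest to a short natural-language query.
issues = issues.map(
    lambda x: {"embeddings": model.encode((x["title"] or "") + " " + (x["body"] or ""))}
)
issues.add_faiss_index(column="embeddings")

query_embedding = model.encode("How can I load a dataset offline?")
scores, samples = issues.get_nearest_examples("embeddings", query_embedding, k=5)
for title, url in zip(samples["title"], samples["html_url"]):
    print(title, url)
```

Because the model is trained for question-to-passage retrieval, the short query and the much longer issue texts live in the same embedding space, which is exactly the asymmetric setup described above.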

Languages

The dataset is in English: all the titles, bodies, and comments were collected in English from the issues of the huggingface/datasets repository.

Dataset Structure

So far the dataset looks like this; all entries are issues that are not pull requests.

Dataset({
    features: ['html_url', 'title', 'comments', 'body', 'number'],
    num_rows: 2893
})
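
For reference, a dataset with this structure can be loaded and inspected as follows; the repository id below is only a guess based on the author's username, so substitute the actual id on the Hub:

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the dataset's actual id on the Hub.
issues_dataset = load_dataset("amannagrawall002/github-issues", split="train")

print(issues_dataset)                 # Dataset({features: [...], num_rows: 2893})
print(issues_dataset[0]["html_url"])  # link to the first issue
```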

Data Instances

{"html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/7079","title":"HfHubHTTPError: 500 Server Error: Internal Server Error for url:","comments":["same issue here.
....................................... list of all comments ],"body":"### Describe the bug\n\nnewly uploaded datasets, since yesterday, yields an error.\r\n\r\.............. body describing the issue```","number":7079}

Data Fields

The fields are fairly self-explanatory, and the summary above should make them clearer still; the dtype of each feature is listed below:

features={'html_url': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'comments': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'body': Value(dtype='string', id=None), 'number': Value(dtype='int64', id=None)}

Source Data

The dataset was built from scratch using the GitHub REST API, from the open-source issues of the huggingface/datasets repository.
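
For example, one page of issues can be pulled from the REST API roughly like this (the token setup is an assumption; the endpoint and parameters are the standard GitHub ones):

```python
import os

import requests

# Assumed setup: a personal access token in the GITHUB_TOKEN environment
# variable; authenticated requests are allowed 5,000 requests per hour.
headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

# The issues endpoint returns both issues and pull requests (100 per page),
# which is why pull requests have to be filtered out afterwards.
url = "https://api.github.com/repos/huggingface/datasets/issues"
params = {"state": "all", "per_page": 100, "page": 1}

batch = requests.get(url, headers=headers, params=params).json()
print(len(batch), "issues/PRs fetched on this page")
```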

Initial Data Collection and Normalization

Making the dataset involved a fair amount of work and error handling. The repository had more than 5,000 issues, which is above the GitHub REST API limit of 5,000 requests per hour, and extracting the comments for each issue also required some automation. So both machine and human involvement went into collecting this dataset, with the aim of building an asymmetric semantic search application for the huggingface/datasets repository.
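
One way to stay under that hourly limit is to watch GitHub's rate-limit headers and pause until the quota resets; the helper below is an illustrative sketch (not the exact code used), reusing the headers from the request above:

```python
import time

import requests

def fetch_comments(issue_number, headers):
    """Fetch the comment bodies for one issue, pausing when the hourly quota runs out."""
    url = f"https://api.github.com/repos/huggingface/datasets/issues/{issue_number}/comments"
    response = requests.get(url, headers=headers)

    # GitHub reports the remaining quota and its reset time in the response headers.
    if int(response.headers.get("X-RateLimit-Remaining", 1)) == 0:
        reset_at = int(response.headers["X-RateLimit-Reset"])  # Unix timestamp
        time.sleep(max(reset_at - time.time(), 0) + 1)
        response = requests.get(url, headers=headers)

    return [comment["body"] for comment in response.json()]
```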

Anyone seriously working through the Hugging Face NLP course will have tried to create this dataset, and it can easily be accessed remotely as mentioned in the course. What is done differently here is that the dataset was built specifically for a semantic search application, so the other columns were discarded. If it has to serve several purposes, this dataset may fall short: for instance, it contains no pull request information, so any task involving pull requests will not work with it.

Aman Agrawal

email - [email protected]