amannagrawall002 committed on
Commit 5abd5f6 · verified · 1 Parent(s): 8bcff5d

Update README.md

Files changed (1)
  1. README.md +52 -18
README.md CHANGED
@@ -28,18 +28,35 @@ size_categories:
  - 1M<n<10M
  ---

- Dataset Card for GitHub Issues

- Dataset Summary

- The motivation for this dataset comes from the last two sections of Chapter 5 of the Hugging Face NLP course, in which a GitHub issues dataset is built from the Hugging Face repo named datasets. Although those sections give the steps for creating the dataset for semantic search, following only the code and instructions there produced errors the chapter does not mention: some fields/issues contained null values where Python was not expecting them, and there were timestamp-related errors in features such as created_at and timeline_url. So instead of using the load_dataset function, I worked directly in the JSON files and used, filtered, and selected only the features needed to build the semantic search application: html_url, issue number, body, comments, and title. The steps were: first, filter out pull requests by keeping only the lines in the original JSON where pull_request is null; then select the features above from each line, ignoring all other columns not useful for semantic search. Because this filtering and selection was done with custom code rather than load_dataset(), the timestamp and JSON-to-Python errors encountered earlier were avoided. Once the result was saved on the local machine, load_dataset() with the "json" format turned it into a Hugging Face dataset. Finally, a map() call was used to extract the comments: the existing comments feature, previously an int64 giving the number of comments on an issue, was replaced with the list of comment strings written by users.

- This dataset can be used to create an asymmetric semantic search application (a short query matched against a longer paragraph that answers it) to address user queries about issues in the Hugging Face datasets repository; the details of how this is done are given in the last section (FAISS index) of Chapter 5.

- Languages
- The dataset is in English: all titles, bodies, and comments were collected in English from the issues of the Hugging Face datasets repository.

- Dataset Structure

  These are the issues collected so far; they are issues only, not pull requests.

@@ -50,26 +67,43 @@ num_rows: 2893

- Data Instances
- {"html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/7079","title":"HfHubHTTPError: 500 Server Error: Internal Server Error for url:","comments":["same issue here. ....................................... list of all comments csv ],"body":"### Describe the bug\n\nnewly uploaded datasets, since yesterday, yields an error.\r\n\r\.............. body describing the issue```","number":7079}

- Data Fields

- The fields are straightforward, and the paragraph above should make them clearer still; the dtype of every feature is given below:

- features={'html_url': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'comments': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'body': Value(dtype='string', id=None), 'number': Value(dtype='int64', id=None)}

  Source Data
- The dataset was made from scratch using the GitHub REST API and the issues of the open-source Hugging Face datasets repository.

  Initial Data Collection and Normalization

- Making this dataset took a lot of involvement and error handling: the repo had more than 5000 issues, which is above the GitHub REST API limit of 5000 requests per hour, and extracting the comments also needed machine time. So both machine and human involvement were required to collect this dataset, with the aim of creating an asymmetric semantic search application for the Hugging Face datasets repo.
 

- Anyone seriously working through the Hugging Face NLP course might have tried to create this dataset, and it can easily be accessed remotely as mentioned in the course. What is done differently here is that the dataset was built specifically for the semantic search application, so the other columns were discarded. If multiple purposes have to be served, this dataset may fall short: for example, it has no pull request information, so any task involving pull requests cannot be served by it.

- Aman Agrawal
- email - amannagrawall002@gmail.com
  - 1M<n<10M
  ---

+ ---
+ title: GitHub Issues Dataset for Semantic Search
+ author: Aman Agrawal
+ contact: amannagrawall002@gmail.com
+ ---
+
+ ## Dataset Summary
+
+ This dataset was created following the last two sections of Chapter 5 of the Hugging Face NLP course. The course outlines the steps for building a semantic search system over the GitHub issues of the Hugging Face `datasets` repository. Several challenges came up during creation, however, including null values and timestamp-related parsing errors, so the dataset was built by working on the raw JSON directly and keeping only the fields relevant to semantic search: `html_url`, `title`, `body`, `comments`, and the issue `number`.
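+
+ A minimal sketch of that preprocessing, assuming the raw GitHub dump is a JSON Lines file (one issue per line); the file names are hypothetical placeholders:
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ KEEP = ["html_url", "title", "comments", "body", "number"]
+
+ # "datasets-issues.jsonl" and "issues-filtered.jsonl" are placeholder names.
+ with open("datasets-issues.jsonl", encoding="utf-8") as src, \
+      open("issues-filtered.jsonl", "w", encoding="utf-8") as dst:
+     for line in src:
+         issue = json.loads(line)
+         # Real issues carry a null "pull_request" field; pull requests do not.
+         if issue.get("pull_request") is not None:
+             continue
+         dst.write(json.dumps({k: issue.get(k) for k in KEEP}) + "\n")
+
+ # Loading the pre-filtered file sidesteps the timestamp/null parsing errors
+ # hit when loading the raw dump directly.
+ issues = load_dataset("json", data_files="issues-filtered.jsonl", split="train")
+ ```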
+
+ ### Purpose
+
+ The dataset supports the development of an asymmetric semantic search application, which matches a short query against longer paragraphs that answer it, specifically for issues related to Hugging Face datasets.
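+
+ A minimal sketch of such an application, following the FAISS-index approach from Chapter 5; the sentence-transformers model and the choice to embed only the issue body are illustrative assumptions:
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import SentenceTransformer
+
+ # Model choice is an assumption; multi-qa models are trained for asymmetric search.
+ model = SentenceTransformer("multi-qa-mpnet-base-dot-v1")
+
+ ds = load_dataset("json", data_files="issues-filtered.jsonl", split="train")
+ ds = ds.map(lambda x: {"embeddings": model.encode(x["body"] or "")})
+ ds.add_faiss_index(column="embeddings")
+
+ query = "How can I load a dataset offline?"
+ scores, hits = ds.get_nearest_examples("embeddings", model.encode(query), k=5)
+ for title, url in zip(hits["title"], hits["html_url"]):
+     print(title, "->", url)
+ ```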
+
+ ## Dataset Info
+
+ - **Configuration Name**: default
+ - **Splits**:
+   - train: 2893 examples, 10108947 bytes
+ - **Download Size**: 4360781 bytes
+ - **Total Dataset Size**: 10108947 bytes
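+
+ Assuming the dataset is hosted on the Hub (the repo id below is a hypothetical placeholder based on the author's username), the train split can be loaded as:
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repo id; substitute the actual dataset path on the Hub.
+ ds = load_dataset("amannagrawall002/github-issues", split="train")
+ print(ds.num_rows)  # expected: 2893
+ ```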
+
+ ## Languages
+
+ This dataset is entirely in English, encompassing all titles, bodies, and comments from the issues of the Hugging Face datasets repository.
+
+ ## Dataset Structure
+
  These are the issues collected so far; they are issues only, not pull requests.

+ ## Data Instances
+
+ An example data instance looks like this:
+
+ ```json
+ {
+   "html_url": "https://github.com/huggingface/datasets/issues/7079",
+   "title": "HfHubHTTPError: 500 Server Error: Internal Server Error for url:",
+   "comments": ["same issue here. ... list of all comments csv"],
+   "body": "### Describe the bug\n\nnewly uploaded datasets, since yesterday, yields an error.\r\n\r\n...body describing the issue",
+   "number": 7079
+ }
+ ```
 
+
+ ## Data Fields
+
+ The dataset includes the following fields:
+
+ - `html_url`: URL of the GitHub issue (string).
+ - `title`: Title of the issue (string).
+ - `comments`: Comments made on the issue (list of strings).
+ - `body`: Detailed description of the issue (string).
+ - `number`: GitHub issue number (int64).
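+
+ Expressed as a `datasets` feature schema (matching the features listed on this card):
+
+ ```python
+ from datasets import Features, Sequence, Value
+
+ features = Features({
+     "html_url": Value("string"),
+     "title": Value("string"),
+     "comments": Sequence(Value("string")),
+     "body": Value("string"),
+     "number": Value("int64"),
+ })
+ ```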
+
  Source Data
+ The dataset is crafted from scratch using the GitHub REST API, collecting the issues of the open-source Hugging Face datasets repository.

  Initial Data Collection and Normalization
+ The creation of this dataset involved handling over 5000 issues, which exceeds the GitHub REST API rate limit of 5000 requests per hour. Extracting the comments for each issue also took machine time, so the collection combined automated processing with manual oversight.
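+
+ A minimal sketch of paginated collection that respects that rate limit; the token variable is a placeholder, while the endpoint, parameters, and rate-limit headers follow the public GitHub REST API:
+
+ ```python
+ import time
+ import requests
+
+ GITHUB_TOKEN = "..."  # placeholder: an authenticated token gets 5000 requests/hour
+ HEADERS = {"Authorization": f"token {GITHUB_TOKEN}"}
+ URL = "https://api.github.com/repos/huggingface/datasets/issues"
+
+ issues, page = [], 1
+ while True:
+     resp = requests.get(URL, headers=HEADERS,
+                         params={"state": "all", "per_page": 100, "page": page})
+     resp.raise_for_status()
+     batch = resp.json()
+     if not batch:
+         break
+     issues.extend(batch)
+     # When the hourly quota is exhausted, sleep until the window resets.
+     if int(resp.headers.get("X-RateLimit-Remaining", "1")) == 0:
+         time.sleep(max(0, int(resp.headers["X-RateLimit-Reset"]) - time.time()))
+     page += 1
+ ```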
+
+ ## Considerations
+
+ This dataset is tailored for creating a semantic search application centered around GitHub issues. It does not contain data on pull requests, which may limit its applicability for tasks requiring such information.
+
+ ### Additional Notes
+
+ Anyone deeply engaged with the Hugging Face NLP course might attempt to create this dataset. While it is accessible remotely as described in the course, this version focuses solely on supporting semantic search applications; other uses may require a dataset with broader field coverage.