amannagrawall002 committed (verified)
Commit f9c6285 · Parent(s): 90afaa3

Update README.md

Files changed (1): README.md (+14 −12)
 
## Title: GitHub Issues Dataset for Semantic Search
---

The dataset preview was not yet available at the time of upload, but the dataset works fine when loaded remotely by passing `amannagrawall002/github-issues` to the `load_dataset` function.

## Dataset Summary

This dataset was created following the last two sections of Chapter 5 of the Hugging Face NLP course. The course outlines the steps to create a dataset for building a semantic search system using GitHub issues from the Hugging Face `datasets` repository. During the creation process, several challenges were encountered, including handling null values and timestamp-related errors. The dataset was refined by keeping only the fields relevant to semantic search: `html_url`, `title`, `body`, `comments`, and the issue `number`.

The processed dataset has the following structure:

```
Dataset({
    features: ['html_url', 'title', 'comments', 'body', 'number'],
    num_rows: 2893
})
```

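The field refinement described above can be sketched in plain Python. The raw record below is illustrative of what the GitHub REST API returns, and `refine` is a hypothetical helper, not the exact code used to build the dataset:

```python
# Illustrative raw issue record, shaped like a GitHub REST API response;
# only a handful of the API's many fields are shown.
raw_issues = [
    {
        "html_url": "https://github.com/huggingface/datasets/issues/7079",
        "title": "HfHubHTTPError: 500 Server Error: Internal Server Error for url:",
        "body": "### Describe the bug\n\nnewly uploaded datasets...",
        "comments": ["same issue here."],
        "number": 7079,
        "created_at": "2024-07-31T12:00:00Z",  # timestamp fields caused errors
        "milestone": None,                     # null-valued fields caused errors
    }
]

# Fields kept for semantic search, matching the dataset's features.
KEEP = ["html_url", "title", "comments", "body", "number"]

def refine(issue):
    """Drop everything except the semantic-search-relevant fields."""
    return {key: issue.get(key) for key in KEEP}

refined = [refine(issue) for issue in raw_issues]
```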
## Data Instances

An example data instance looks like this:

```json
{
    "html_url": "https://github.com/huggingface/datasets/issues/7079",
    "title": "HfHubHTTPError: 500 Server Error: Internal Server Error for url:",
    "comments": ["same issue here. ... list of all comments csv"],
    "body": "### Describe the bug\n\nnewly uploaded datasets, since yesterday, yields an error.\r\n\r\n...body describing the issue",
    "number": 7079
}
```

## Data Fields

The dataset includes the following fields:

- `html_url`: URL of the GitHub issue (string).
- `title`: Title of the GitHub issue (string).
- `comments`: Comments on the issue (list of strings).
- `body`: Body text describing the issue (string).
- `number`: GitHub issue number (int64).

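For semantic search, these fields are typically concatenated into a single string per issue before embedding. `issue_to_text` below is a hypothetical helper illustrating one way to do that; it is not part of the dataset itself:

```python
def issue_to_text(example):
    """Join title, body, and comments into one searchable string."""
    comments = " ".join(example.get("comments") or [])
    parts = [example.get("title") or "", example.get("body") or "", comments]
    return " ".join(part for part in parts if part)

# Minimal usage example with a shortened instance.
sample = {
    "title": "HfHubHTTPError: 500 Server Error",
    "body": "newly uploaded datasets yield an error",
    "comments": ["same issue here."],
}
text = issue_to_text(sample)
```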
To use this dataset in an environment where the `datasets` library is installed, it can be loaded with two lines of code:

```python
from datasets import load_dataset

ds = load_dataset("amannagrawall002/github-issues")
```

## Source Data

The dataset is crafted from scratch using the GitHub REST API, focusing on open issues from the Hugging Face `datasets` repository.
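A minimal sketch of pulling one page of open issues with only the standard library. The endpoint and query parameters come from GitHub's documented REST API; the function names are hypothetical:

```python
import json
from urllib.request import Request, urlopen

def issues_url(owner, repo, page=1, per_page=100, state="open"):
    """Build the paginated issues-listing URL for the GitHub REST API."""
    return (f"https://api.github.com/repos/{owner}/{repo}/issues"
            f"?state={state}&per_page={per_page}&page={page}")

def fetch_issues_page(owner="huggingface", repo="datasets", page=1):
    """Fetch one page of issues (network call; note that unauthenticated
    requests get a far lower rate limit than authenticated ones)."""
    request = Request(issues_url(owner, repo, page),
                      headers={"Accept": "application/vnd.github+json"})
    with urlopen(request) as response:
        return json.load(response)
```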

## Initial Data Collection and Normalization

The creation of this dataset involved handling over 5,000 issues, which exceeds the GitHub REST API's rate limit of 5,000 requests per hour. Additionally, extracting comments required significant computational resources, so the process combined automated collection with manual oversight.
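One way to stay under that limit is to watch GitHub's rate-limit response headers and sleep until the window resets. The sketch below assumes `X-RateLimit-Remaining` and `X-RateLimit-Reset` (a Unix timestamp) have been parsed from a response; the helper name is hypothetical:

```python
import time

def wait_if_exhausted(remaining, reset_epoch, now=None, sleep=time.sleep):
    """Sleep until the rate-limit window resets once no requests remain.

    `remaining` and `reset_epoch` mirror the X-RateLimit-Remaining and
    X-RateLimit-Reset headers GitHub sends with every API response.
    """
    now = time.time() if now is None else now
    if remaining > 0:
        return False
    sleep(max(0.0, reset_epoch - now) + 1.0)  # one-second buffer past reset
    return True
```

Injecting `now` and `sleep` keeps the helper easy to test without real waiting.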

## Considerations

This dataset is tailored for creating a semantic search application centered around GitHub issues. It does not contain data on pull requests, which may limit its applicability for tasks requiring such information.

### Additional Notes

Anyone deeply engaged with the Hugging Face NLP course might attempt to create this dataset. While it's accessible remotely as described in the course, this specific version focuses solely on supporting semantic search applications. Other uses may require a dataset with broader field coverage.