| column | dtype | value summary |
| --- | --- | --- |
| url | string | lengths 58-61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-2.12B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-6.65k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | 0-4 items |
| state | string | 2 distinct values |
| locked | bool | 1 distinct value |
| assignee | dict | |
| assignees | list | 0-4 items |
| milestone | dict | |
| comments | int64 | 0-70 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 distinct values |
| active_lock_reason | float64 | |
| draft | float64 | 0, 1, or null |
| pull_request | dict | |
| body | string | lengths 0-228k, nullable |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 distinct values |
| is_pull_request | bool | 2 distinct values |
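The sample records below follow the column order of this schema, one value per line. As a minimal, hedged sketch (the Hub dataset id is a hypothetical placeholder, not taken from this page), a dataset with this schema can be loaded and inspected with the `datasets` library:

```python
# Minimal sketch: load a dataset with the schema above and inspect it.
# "user/github-issues" is a placeholder id, not taken from this page.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")

print(ds.features)        # column names and dtypes, matching the table above
print(ds[0]["title"])     # title of the first record

prs = ds.filter(lambda row: row["is_pull_request"])  # keep only pull requests
print(prs.num_rows)
```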
https://api.github.com/repos/huggingface/datasets/issues/2758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2758/comments
https://api.github.com/repos/huggingface/datasets/issues/2758/events
https://github.com/huggingface/datasets/pull/2758
960,206,575
MDExOlB1bGxSZXF1ZXN0NzAzMjQ5Nzky
2,758
Raise ManualDownloadError when loading a dataset that requires previous manual download
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-04T10:19:55Z"
"2021-08-04T11:36:30Z"
"2021-08-04T11:36:30Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2758.diff", "html_url": "https://github.com/huggingface/datasets/pull/2758", "merged_at": "2021-08-04T11:36:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2758.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2758" }
This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing. The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode. Close #2749. cc: @severo
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2758/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2758/timeline
null
null
true
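A hedged sketch of the behaviour described in the pull request above: loading a dataset that requires a prior manual download raises a dedicated `ManualDownloadError`, in both normal and streaming mode. The import path and the choice of the `reclor` dataset are assumptions for illustration:

```python
from datasets import load_dataset
from datasets.builder import ManualDownloadError  # import path assumed

try:
    # reclor is one of the datasets that require a manual download (see issue 2749 below)
    load_dataset("reclor", streaming=True)
except ManualDownloadError as err:
    print(err)  # the message should point to data_dir and the manual download steps
```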
https://api.github.com/repos/huggingface/datasets/issues/2757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2757/comments
https://api.github.com/repos/huggingface/datasets/issues/2757/events
https://github.com/huggingface/datasets/issues/2757
959,984,081
MDU6SXNzdWU5NTk5ODQwODE=
2,757
Unexpected type after `concatenate_datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JulesBelveze", "id": 32683010, "login": "JulesBelveze", "node_id": "MDQ6VXNlcjMyNjgzMDEw", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "type": "User", "url": "https://api.github.com/users/JulesBelveze" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2021-08-04T07:10:39Z"
"2021-08-04T16:01:24Z"
"2021-08-04T16:01:23Z"
NONE
null
null
null
## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to a weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everything behave as expected. ## Steps to reproduce the bug ```python >>> featurized_teacher Dataset({ features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'], num_rows: 502 }) >>> for f in featurized_teacher.features: print(featurized_teacher[f].shape) torch.Size([502]) torch.Size([502, 300]) torch.Size([502, 300]) torch.Size([502, 300]) >>> featurized_student Dataset({ features: ['s_features', 's_labels'], num_rows: 502 }) >>> for f in featurized_student.features: print(featurized_student[f].shape) torch.Size([502, 64]) torch.Size([502]) ``` The shapes seem alright to me. Then the results after concatenation are as follow: ```python >>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1) >>> type(concat_dataset["t_labels"]) <class 'list'> ``` One would expect to obtain the same type as the one before concatenation. Am I doing something wrong here? Any idea on how to fix this unexpected behavior? ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2757/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2757/timeline
null
completed
false
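A hedged, self-contained sketch related to the report above. It assumes that `concatenate_datasets` simply does not carry over the inputs' output format (e.g. `torch`), so re-applying the format on the combined dataset restores tensors; this is an illustration, not the resolution from the issue thread:

```python
import datasets  # the "torch" format below also requires torch to be installed

left = datasets.Dataset.from_dict({"a": [[1.0, 2.0], [3.0, 4.0]]})
right = datasets.Dataset.from_dict({"b": [0, 1]})
left.set_format("torch")
right.set_format("torch")

combined = datasets.concatenate_datasets([left, right], axis=1)
print(type(combined["a"]))    # list: the inputs' torch format is not kept

combined.set_format("torch")  # assumption: re-setting the format restores tensors
print(type(combined["a"]))    # torch.Tensor
```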
https://api.github.com/repos/huggingface/datasets/issues/2756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2756/comments
https://api.github.com/repos/huggingface/datasets/issues/2756/events
https://github.com/huggingface/datasets/pull/2756
959,255,646
MDExOlB1bGxSZXF1ZXN0NzAyMzk4Mjk1
2,756
Fix metadata JSON for ubuntu_dialogs_corpus dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T15:48:59Z"
"2021-08-04T09:43:25Z"
"2021-08-04T09:43:25Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2756.diff", "html_url": "https://github.com/huggingface/datasets/pull/2756", "merged_at": "2021-08-04T09:43:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2756.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2756" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2756/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2756/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2755/comments
https://api.github.com/repos/huggingface/datasets/issues/2755/events
https://github.com/huggingface/datasets/pull/2755
959,115,888
MDExOlB1bGxSZXF1ZXN0NzAyMjgwMjI4
2,755
Fix metadata JSON for turkish_movie_sentiment dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T13:25:44Z"
"2021-08-04T09:06:54Z"
"2021-08-04T09:06:53Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2755.diff", "html_url": "https://github.com/huggingface/datasets/pull/2755", "merged_at": "2021-08-04T09:06:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2755.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2755" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2755/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2755/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2754/comments
https://api.github.com/repos/huggingface/datasets/issues/2754/events
https://github.com/huggingface/datasets/pull/2754
959,105,577
MDExOlB1bGxSZXF1ZXN0NzAyMjcxMjM4
2,754
Generate metadata JSON for telugu_books dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T13:14:52Z"
"2021-08-04T08:49:02Z"
"2021-08-04T08:49:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2754.diff", "html_url": "https://github.com/huggingface/datasets/pull/2754", "merged_at": "2021-08-04T08:49:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2754" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2754/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2754/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2753/comments
https://api.github.com/repos/huggingface/datasets/issues/2753/events
https://github.com/huggingface/datasets/pull/2753
959,036,995
MDExOlB1bGxSZXF1ZXN0NzAyMjEyMjMz
2,753
Generate metadata JSON for reclor dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T11:52:29Z"
"2021-08-04T08:07:15Z"
"2021-08-04T08:07:15Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2753.diff", "html_url": "https://github.com/huggingface/datasets/pull/2753", "merged_at": "2021-08-04T08:07:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/2753.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2753" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2753/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2752/comments
https://api.github.com/repos/huggingface/datasets/issues/2752/events
https://github.com/huggingface/datasets/pull/2752
959,023,608
MDExOlB1bGxSZXF1ZXN0NzAyMjAxMjAy
2,752
Generate metadata JSON for lm1b dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T11:34:56Z"
"2021-08-04T06:40:40Z"
"2021-08-04T06:40:39Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2752.diff", "html_url": "https://github.com/huggingface/datasets/pull/2752", "merged_at": "2021-08-04T06:40:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2752.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2752" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2752/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2752/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2751/comments
https://api.github.com/repos/huggingface/datasets/issues/2751/events
https://github.com/huggingface/datasets/pull/2751
959,021,262
MDExOlB1bGxSZXF1ZXN0NzAyMTk5MjA5
2,751
Update metadata for wikihow dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T11:31:57Z"
"2021-08-03T15:52:09Z"
"2021-08-03T15:52:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2751.diff", "html_url": "https://github.com/huggingface/datasets/pull/2751", "merged_at": "2021-08-03T15:52:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2751.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2751" }
Update metadata for wikihow dataset: - Remove leading new line character in description and citation - Update metadata JSON - Remove no longer necessary `urls_checksums/checksums.txt` file Related to #2748.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2751/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2751/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2750/comments
https://api.github.com/repos/huggingface/datasets/issues/2750/events
https://github.com/huggingface/datasets/issues/2750
958,984,730
MDU6SXNzdWU5NTg5ODQ3MzA=
2,750
Second concatenation of datasets produces errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
5
"2021-08-03T10:47:04Z"
"2022-01-19T14:23:43Z"
"2022-01-19T14:19:05Z"
NONE
null
null
null
Hi, I am need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tags names) are collapsed. This hinders, for instance, the usage of tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets data = load_dataset('trec')['train'] concatenated = concatenate_datasets([data, data]) concatenated_2 = concatenate_datasets([concatenated, concatenated]) print('True features of features:', concatenated.features) print('\nProduced features of features:', concatenated_2.features) ``` outputs ``` True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)} Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)} ``` I am using `datasets` v.1.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2750/timeline
null
completed
false
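A hedged workaround sketch for the report above, not taken from the issue thread: if the `ClassLabel` features degrade to plain `int64` after the second concatenation, casting the result back to the features of the first concatenation should restore the label names (column names as in the report, `datasets` 1.11.0 era):

```python
from datasets import load_dataset, concatenate_datasets

data = load_dataset("trec")["train"]
concatenated = concatenate_datasets([data, data])
concatenated_2 = concatenate_datasets([concatenated, concatenated])

# Assumption: casting back to the earlier features restores the ClassLabel names.
concatenated_2 = concatenated_2.cast(concatenated.features)
print(concatenated_2.features["label-coarse"])  # ClassLabel(...), not Value('int64')
```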
https://api.github.com/repos/huggingface/datasets/issues/2749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2749/comments
https://api.github.com/repos/huggingface/datasets/issues/2749/events
https://github.com/huggingface/datasets/issues/2749
958,968,748
MDU6SXNzdWU5NTg5Njg3NDg=
2,749
Raise a proper exception when trying to stream a dataset that requires to manually download files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2021-08-03T10:26:27Z"
"2021-08-09T08:53:35Z"
"2021-08-04T11:36:30Z"
CONTRIBUTOR
null
null
null
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reclor", streaming=True) ``` ## Expected results Ideally: raise a specific exception, something like `ManualDownloadError`. Or at least give the reason in the message, as when we load in normal mode: ```python from datasets import load_dataset dataset = load_dataset("reclor") ``` ``` AssertionError: The dataset reclor with config default requires manual data. Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name') . Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>') ``` ## Actual results ``` TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-11.5-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2749/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2749/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2748/comments
https://api.github.com/repos/huggingface/datasets/issues/2748/events
https://github.com/huggingface/datasets/pull/2748
958,889,041
MDExOlB1bGxSZXF1ZXN0NzAyMDg4NTk4
2,748
Generate metadata JSON for wikihow dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-03T08:55:40Z"
"2021-08-03T10:17:51Z"
"2021-08-03T10:17:51Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2748.diff", "html_url": "https://github.com/huggingface/datasets/pull/2748", "merged_at": "2021-08-03T10:17:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/2748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2748" }
Related to #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2748/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2747/comments
https://api.github.com/repos/huggingface/datasets/issues/2747/events
https://github.com/huggingface/datasets/pull/2747
958,867,627
MDExOlB1bGxSZXF1ZXN0NzAyMDcwOTgy
2,747
add multi-proc in `to_json`
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
17
"2021-08-03T08:30:13Z"
"2021-10-19T18:24:21Z"
"2021-09-13T13:56:37Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2747.diff", "html_url": "https://github.com/huggingface/datasets/pull/2747", "merged_at": "2021-09-13T13:56:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2747" }
Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of current version (say v1) and multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air) 1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a single run) v1- ~225 seconds for converting whole dataset to json v2- ~200 seconds for converting whole dataset to json 2. Dataset name: `lama` - 1.3M samples (all samples were used, reporting this for 2 runs) v1- ~26 seconds for converting whole dataset to json v2- ~23.6 seconds for converting whole dataset to json I think it's safe to say that v2 is 10% faster as compared to v1. Timings may improve further with better configuration. The only bottleneck I feel is writing to file from the output list. If we can improve that aspect then timings may improve further. Let me know if any changes/improvements can be done in this @stas00, @lhoestq, @albertvillanova. @lhoestq even suggested to extend this work with other export methods as well like `csv` or `parquet`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2747/timeline
null
null
true
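A hedged usage sketch of the feature this pull request describes: exporting a dataset to JSON with several processes. The `num_proc` parameter name is assumed from the current `datasets` API rather than quoted from the PR:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"sample {i}" for i in range(1000)]})

ds.to_json("out_single.json")             # single-process export (previous behaviour)
ds.to_json("out_multi.json", num_proc=2)  # multi-process export; parameter name assumed
```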
https://api.github.com/repos/huggingface/datasets/issues/2746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2746/comments
https://api.github.com/repos/huggingface/datasets/issues/2746/events
https://github.com/huggingface/datasets/issues/2746
958,551,619
MDU6SXNzdWU5NTg1NTE2MTk=
2,746
Cannot load `few-nerd` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehrad0711", "id": 28717374, "login": "Mehrad0711", "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehrad0711" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
6
"2021-08-02T22:18:57Z"
"2021-11-16T08:51:34Z"
"2021-08-03T19:45:43Z"
NONE
null
null
null
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError). Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53... Traceback (most recent call last): File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples with open(filepath, encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json' ``` The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Python version: 3.8 - PyArrow version: 1.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2746/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2745/comments
https://api.github.com/repos/huggingface/datasets/issues/2745/events
https://github.com/huggingface/datasets/pull/2745
958,269,579
MDExOlB1bGxSZXF1ZXN0NzAxNTc0Mjcz
2,745
added semeval18_emotion_classification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxpel", "id": 31095360, "login": "maxpel", "node_id": "MDQ6VXNlcjMxMDk1MzYw", "organizations_url": "https://api.github.com/users/maxpel/orgs", "received_events_url": "https://api.github.com/users/maxpel/received_events", "repos_url": "https://api.github.com/users/maxpel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "type": "User", "url": "https://api.github.com/users/maxpel" }
[]
closed
false
null
[]
null
7
"2021-08-02T15:39:55Z"
"2021-10-29T09:22:05Z"
"2021-09-21T09:48:35Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2745.diff", "html_url": "https://github.com/huggingface/datasets/pull/2745", "merged_at": "2021-09-21T09:48:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2745" }
I added the data set of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages. ``` datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification ``` Both commands ran successfully. I couldn't create the dummy data (the files are tsvs but have .txt ending, maybe that's the problem?) and therefore the test on the dummy data fails, maybe someone can help here. I also formatted the code: ``` black --line-length 119 --target-version py36 datasets/semeval18_emotion_classification/ isort datasets/semeval18_emotion_classification/ flake8 datasets/semeval18_emotion_classification/ ``` That's the publication for reference: Mohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). SemEval-2018 task 1: Affect in tweets. Proceedings of the 12th International Workshop on Semantic Evaluation, 1–17. https://doi.org/10.18653/v1/S18-1001
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2745/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2744/comments
https://api.github.com/repos/huggingface/datasets/issues/2744/events
https://github.com/huggingface/datasets/pull/2744
958,146,637
MDExOlB1bGxSZXF1ZXN0NzAxNDY4NDcz
2,744
Fix key by recreating metadata JSON for journalists_questions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-02T13:27:53Z"
"2021-08-03T09:25:34Z"
"2021-08-03T09:25:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2744.diff", "html_url": "https://github.com/huggingface/datasets/pull/2744", "merged_at": "2021-08-03T09:25:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2744.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2744" }
Close #2743.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2744/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2744/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2743/comments
https://api.github.com/repos/huggingface/datasets/issues/2743/events
https://github.com/huggingface/datasets/issues/2743
958,119,251
MDU6SXNzdWU5NTgxMTkyNTE=
2,743
Dataset JSON is incorrect
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2021-08-02T13:01:26Z"
"2021-08-03T10:06:57Z"
"2021-08-03T09:25:33Z"
CONTRIBUTOR
null
null
null
## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset_infos.json. The only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead. ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Steps to reproduce the bug Look at the files. ## Expected results The first key should be `plain_text`: ```json { "plain_text": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Actual results ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2743/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2742/comments
https://api.github.com/repos/huggingface/datasets/issues/2742/events
https://github.com/huggingface/datasets/issues/2742
958,114,064
MDU6SXNzdWU5NTgxMTQwNjQ=
2,742
Improve detection of streamable file types
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
1
"2021-08-02T12:55:09Z"
"2021-11-12T17:18:10Z"
"2021-11-12T17:18:10Z"
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownloadManager(base_path=builder.base_path)) ``` raises ``` NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet ``` But the file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed: ```bash curl --header "Range: bytes=0-100" -L https://drive.google.com/uc\?export\=download\&id\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U 506938088174940160 yes 1 302221719412830209 yes 1 289761704907268096 yes 1 513820885032378369 yes % ``` Yet, it's wrongly categorized as a file type that cannot be streamed because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats. **Describe the solution you'd like** In the case of a URL (instead of a local path), ask for the MIME type and decide based on that value? Note that this would not work in this particular case, because the value of `content_type` is `text/html; charset=UTF-8`. **Describe alternatives you've considered** Add a variable in the dataset script to set the data format by hand.
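A rough sketch of the MIME-type idea suggested above (not the `datasets` implementation; the list of streamable types is purely illustrative). As noted, it would still fail for this Google Drive link, which answers with `text/html; charset=UTF-8`:

```python
import requests

STREAMABLE_TYPES = {"text/plain", "text/csv", "application/json"}  # illustrative list only

def looks_streamable(url: str) -> bool:
    # Ask for the headers only, without downloading the file
    response = requests.head(url, allow_redirects=True, timeout=10)
    content_type = response.headers.get("Content-Type", "").split(";")[0].strip()
    return content_type in STREAMABLE_TYPES
```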
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2742/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2741/comments
https://api.github.com/repos/huggingface/datasets/issues/2741/events
https://github.com/huggingface/datasets/issues/2741
957,979,559
MDU6SXNzdWU5NTc5Nzk1NTk=
2,741
Add Hypersim dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
null
0
"2021-08-02T10:06:50Z"
"2021-12-08T12:06:51Z"
null
MEMBER
null
null
null
## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2741/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2740/comments
https://api.github.com/repos/huggingface/datasets/issues/2740/events
https://github.com/huggingface/datasets/pull/2740
957,911,035
MDExOlB1bGxSZXF1ZXN0NzAxMjY0NTI3
2,740
Update release instructions
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-02T08:46:00Z"
"2021-08-02T14:39:56Z"
"2021-08-02T14:39:56Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2740.diff", "html_url": "https://github.com/huggingface/datasets/pull/2740", "merged_at": "2021-08-02T14:39:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2740" }
Update release instructions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2740/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2739/comments
https://api.github.com/repos/huggingface/datasets/issues/2739/events
https://github.com/huggingface/datasets/pull/2739
957,751,260
MDExOlB1bGxSZXF1ZXN0NzAxMTI0ODQ3
2,739
Pass tokenize to sacrebleu only if explicitly passed by user
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-08-02T05:09:05Z"
"2021-08-03T04:23:37Z"
"2021-08-03T04:23:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2739.diff", "html_url": "https://github.com/huggingface/datasets/pull/2739", "merged_at": "2021-08-03T04:23:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2739.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2739" }
Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (and `sacrebleu` will use its default, no matter where it is and how it is called). Close: #2737.
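A minimal sketch of the approach (simplified, not the actual metric code): build the keyword arguments conditionally, so `sacrebleu` keeps its own default whenever the user does not set `tokenize`:

```python
import sacrebleu

def compute_bleu(predictions, references, tokenize=None):
    # Only forward `tokenize` if the caller explicitly set it
    kwargs = {} if tokenize is None else {"tokenize": tokenize}
    # `references` is a list of reference streams, e.g. [["ref for sent 1", "ref for sent 2"]]
    return sacrebleu.corpus_bleu(predictions, references, **kwargs)
```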
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2739/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2739/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2738/comments
https://api.github.com/repos/huggingface/datasets/issues/2738/events
https://github.com/huggingface/datasets/pull/2738
957,517,746
MDExOlB1bGxSZXF1ZXN0NzAwOTI5NzA4
2,738
Sunbird AI Ugandan low resource language dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/12105163?v=4", "events_url": "https://api.github.com/users/ak3ra/events{/privacy}", "followers_url": "https://api.github.com/users/ak3ra/followers", "following_url": "https://api.github.com/users/ak3ra/following{/other_user}", "gists_url": "https://api.github.com/users/ak3ra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ak3ra", "id": 12105163, "login": "ak3ra", "node_id": "MDQ6VXNlcjEyMTA1MTYz", "organizations_url": "https://api.github.com/users/ak3ra/orgs", "received_events_url": "https://api.github.com/users/ak3ra/received_events", "repos_url": "https://api.github.com/users/ak3ra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ak3ra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ak3ra/subscriptions", "type": "User", "url": "https://api.github.com/users/ak3ra" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
4
"2021-08-01T15:18:00Z"
"2022-10-03T09:37:30Z"
"2022-10-03T09:37:30Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2738.diff", "html_url": "https://github.com/huggingface/datasets/pull/2738", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2738.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2738" }
Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2738/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2738/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2737/comments
https://api.github.com/repos/huggingface/datasets/issues/2737/events
https://github.com/huggingface/datasets/issues/2737
957,124,881
MDU6SXNzdWU5NTcxMjQ4ODE=
2,737
SacreBLEU update
{ "avatar_url": "https://avatars.githubusercontent.com/u/46989091?v=4", "events_url": "https://api.github.com/users/devrimcavusoglu/events{/privacy}", "followers_url": "https://api.github.com/users/devrimcavusoglu/followers", "following_url": "https://api.github.com/users/devrimcavusoglu/following{/other_user}", "gists_url": "https://api.github.com/users/devrimcavusoglu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/devrimcavusoglu", "id": 46989091, "login": "devrimcavusoglu", "node_id": "MDQ6VXNlcjQ2OTg5MDkx", "organizations_url": "https://api.github.com/users/devrimcavusoglu/orgs", "received_events_url": "https://api.github.com/users/devrimcavusoglu/received_events", "repos_url": "https://api.github.com/users/devrimcavusoglu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/devrimcavusoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devrimcavusoglu/subscriptions", "type": "User", "url": "https://api.github.com/users/devrimcavusoglu" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
5
"2021-07-30T23:53:08Z"
"2021-09-22T10:47:41Z"
"2021-08-03T04:23:37Z"
NONE
null
null
null
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises `AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'`. This happens because the new version of sacrebleu no longer defines `DEFAULT_TOKENIZER`, but sacrebleu.py still tries to import it. It can currently be worked around by pinning `sacrebleu==1.5.0`. ## Steps to reproduce the bug ```python import datasets sacrebleu = datasets.load_metric('sacrebleu') predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] references = ["It is a guide to action that ensures that the military will forever heed Party commands"] results = sacrebleu.compute(predictions=predictions, references=references) print(results) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: Python 3.8.0 - PyArrow version: 5.0.0
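Besides pinning `sacrebleu==1.5.0`, a version-tolerant lookup is another possible workaround (a sketch, not the fix that was merged in #2739):

```python
import sacrebleu

# sacrebleu >= 2.0.0 no longer exposes DEFAULT_TOKENIZER, so fall back gracefully
DEFAULT_TOKENIZER = getattr(sacrebleu, "DEFAULT_TOKENIZER", None)
print(DEFAULT_TOKENIZER)  # None on sacrebleu >= 2.0.0, e.g. "13a" on 1.5.x
```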
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2737/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2737/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2736/comments
https://api.github.com/repos/huggingface/datasets/issues/2736/events
https://github.com/huggingface/datasets/issues/2736
956,895,199
MDU6SXNzdWU5NTY4OTUxOTk=
2,736
Add Microsoft Building Footprints dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
open
false
null
[]
null
1
"2021-07-30T16:17:08Z"
"2021-12-08T12:09:03Z"
null
MEMBER
null
null
null
## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.microsoft.com/en-us/maps/building-footprints - **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @sashavor
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2736/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2735/comments
https://api.github.com/repos/huggingface/datasets/issues/2735/events
https://github.com/huggingface/datasets/issues/2735
956,889,365
MDU6SXNzdWU5NTY4ODkzNjU=
2,735
Add Open Buildings dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
0
"2021-07-30T16:08:39Z"
"2021-07-31T05:01:25Z"
null
MEMBER
null
null
null
## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa. See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html - **Paper:** https://arxiv.org/abs/2107.12283 - **Data:** https://sites.research.google/open-buildings/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @osanseviero
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2735/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2734/comments
https://api.github.com/repos/huggingface/datasets/issues/2734/events
https://github.com/huggingface/datasets/pull/2734
956,844,874
MDExOlB1bGxSZXF1ZXN0NzAwMzc4NjI4
2,734
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-30T15:22:51Z"
"2021-07-30T15:47:58Z"
"2021-07-30T15:47:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2734.diff", "html_url": "https://github.com/huggingface/datasets/pull/2734", "merged_at": "2021-07-30T15:47:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2734" }
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2734/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2733/comments
https://api.github.com/repos/huggingface/datasets/issues/2733/events
https://github.com/huggingface/datasets/pull/2733
956,725,476
MDExOlB1bGxSZXF1ZXN0NzAwMjc1NDMy
2,733
Add missing parquet known extension
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-30T13:01:20Z"
"2021-07-30T13:24:31Z"
"2021-07-30T13:24:30Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2733.diff", "html_url": "https://github.com/huggingface/datasets/pull/2733", "merged_at": "2021-07-30T13:24:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2733.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2733" }
This code was failing because the parquet extension wasn't recognized: ```python from datasets import load_dataset base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" data_files = {"train": base_url + "wikipedia-train.parquet"} wiki = load_dataset("parquet", data_files=data_files, split="train", streaming=True) ``` It raises ```python NotImplementedError: Extraction protocol for file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/wikipedia-train.parquet is not implemented yet ``` I added `parquet` to the list of known extensions. EDIT: added pickle, conllu, and xml extensions as well.
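For context, a simplified sketch of the kind of check involved (the real logic and the full extension list live in the streaming download manager; the list below is only illustrative):

```python
KNOWN_EXTENSIONS = ["txt", "csv", "json", "jsonl", "tsv", "parquet", "pickle", "conllu", "xml"]  # illustrative

def is_known_extension(url: str) -> bool:
    # Streaming can open these files directly; no extraction protocol is needed
    suffix = url.split("?")[0].split(".")[-1].lower()
    return suffix in KNOWN_EXTENSIONS
```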
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2733/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2733/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2732/comments
https://api.github.com/repos/huggingface/datasets/issues/2732/events
https://github.com/huggingface/datasets/pull/2732
956,676,360
MDExOlB1bGxSZXF1ZXN0NzAwMjMzMzQy
2,732
Updated TTC4900 Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yavuzKomecoglu", "id": 5150963, "login": "yavuzKomecoglu", "node_id": "MDQ6VXNlcjUxNTA5NjM=", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "type": "User", "url": "https://api.github.com/users/yavuzKomecoglu" }
[]
closed
false
null
[]
null
2
"2021-07-30T11:52:14Z"
"2021-07-30T16:00:51Z"
"2021-07-30T15:58:14Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2732.diff", "html_url": "https://github.com/huggingface/datasets/pull/2732", "merged_at": "2021-07-30T15:58:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2732.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2732" }
- The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download. - Updated readme.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2732/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2732/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2731/comments
https://api.github.com/repos/huggingface/datasets/issues/2731/events
https://github.com/huggingface/datasets/pull/2731
956,087,452
MDExOlB1bGxSZXF1ZXN0Njk5NzQwMjg5
2,731
Adding to_tf_dataset method
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[]
closed
false
null
[]
null
7
"2021-07-29T18:10:25Z"
"2021-09-16T13:50:54Z"
"2021-09-16T13:50:54Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2731.diff", "html_url": "https://github.com/huggingface/datasets/pull/2731", "merged_at": "2021-09-16T13:50:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2731" }
Oh my **god** do not merge this yet, it's just a draft. I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the whole dataset in memory!), correct shuffling, variable-length batches to reduce compute, and correct support for unusual padding. It achieves that by calling the tokenizer `pad` method in the middle of a TF compute graph via a very hacky call to `tf.py_function`, which is heretical but seems to work. A number of issues need to be resolved before it's ready to merge, though: 1) Is a MixIn the right way to do this? Do other classes besides `arrow_dataset.Dataset` need this method too? 2) Needs an argument to support constant-length batches for TPU training - this is easy to add and I'll do it soon. 3) Needs the user to supply the list of columns to drop from the arrow `Dataset`. Is there some automatic way to get the columns we want, or see which columns were added by the tokenizer? 4) Assumes the label column is always present and always called "label" - this is probably not great, but I'm not sure what the 'correct' thing to do here is.
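To illustrate the `tf.py_function` trick described above, here is a heavily simplified sketch (column names, dtypes and the shuffling strategy are assumptions; the actual method also handles labels, drops unused columns and supports more options):

```python
import tensorflow as tf

def to_tf_batches(hf_dataset, tokenizer, columns=("input_ids", "attention_mask"), batch_size=8):
    # Stream batches of row indices, then collate each batch with the tokenizer's pad()
    indices = tf.data.Dataset.from_tensor_slices(list(range(len(hf_dataset))))
    indices = indices.shuffle(len(hf_dataset)).batch(batch_size)

    def collate(batch_indices):
        rows = hf_dataset[batch_indices.numpy().tolist()]  # dict of lists, read from disk
        padded = tokenizer.pad({k: rows[k] for k in columns}, return_tensors="np")
        return tuple(padded[k].astype("int64") for k in columns)

    return indices.map(
        lambda idx: tf.py_function(collate, [idx], [tf.int64] * len(columns)),
        num_parallel_calls=tf.data.AUTOTUNE,
    )
```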
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2731/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2731/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2730/comments
https://api.github.com/repos/huggingface/datasets/issues/2730/events
https://github.com/huggingface/datasets/issues/2730
955,987,834
MDU6SXNzdWU5NTU5ODc4MzQ=
2,730
Update CommonVoice with new release
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
3
"2021-07-29T15:59:59Z"
"2021-08-07T16:19:19Z"
null
MEMBER
null
null
null
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220). - **Paper:** https://discourse.mozilla.org/t/common-voice-2021-mid-year-dataset-release/83812 - **Data:** https://commonvoice.mozilla.org/en/datasets - **Motivation:** More data and more varied. I think we just need to add configs in the existing dataset script. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
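For reference, "adding configs" to a dataset script typically looks like the sketch below (class and field names here are assumptions for illustration, not the actual CommonVoice script):

```python
from datasets import BuilderConfig

class CommonVoiceConfig(BuilderConfig):
    def __init__(self, language, release, **kwargs):
        super().__init__(name=language, **kwargs)
        self.language = language
        self.release = release  # e.g. "2021-06" for the mid-2021 release

# One config per language added or updated in the new release (illustrative subset)
NEW_CONFIGS = [
    CommonVoiceConfig(language=lang, release="2021-06", version="7.0.0")
    for lang in ["th", "lg", "eo", "ta"]
]
```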
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2730/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2729/comments
https://api.github.com/repos/huggingface/datasets/issues/2729/events
https://github.com/huggingface/datasets/pull/2729
955,920,489
MDExOlB1bGxSZXF1ZXN0Njk5NTk5MjA4
2,729
Fix IndexError while loading Arabic Billion Words dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
0
"2021-07-29T14:47:02Z"
"2021-07-30T13:03:55Z"
"2021-07-30T13:03:55Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2729.diff", "html_url": "https://github.com/huggingface/datasets/pull/2729", "merged_at": "2021-07-30T13:03:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2729.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2729" }
Catch `IndexError` and ignore that record. Close #2727.
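A minimal sketch of the idea (not the exact diff): guard the `out[0]` access that raised in #2727 and let the caller drop records that yield no tag:

```python
def extract_first_tag(out):
    # `out` holds the candidate tag values extracted from a record
    try:
        return out[0]
    except IndexError:
        return None  # caller skips records that return None
```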
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2729/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2729/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2728/comments
https://api.github.com/repos/huggingface/datasets/issues/2728/events
https://github.com/huggingface/datasets/issues/2728
955,892,970
MDU6SXNzdWU5NTU4OTI5NzA=
2,728
Concurrent use of same dataset (already downloaded)
{ "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PierreColombo", "id": 22492839, "login": "PierreColombo", "node_id": "MDQ6VXNlcjIyNDkyODM5", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "repos_url": "https://api.github.com/users/PierreColombo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "type": "User", "url": "https://api.github.com/users/PierreColombo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
4
"2021-07-29T14:18:38Z"
"2021-08-02T07:25:57Z"
null
CONTRIBUTOR
null
null
null
## Describe the bug When launching several jobs at the same time that load the same dataset, some errors are triggered (see the last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" "bert-large-cased" "roberta-large" "albert-base-v1" "albert-large-v1"; do for TASK_NAME in "mrpc" "rte" 'imdb' "paws" "mnli"; do export OUTPUT_DIR=${MODEL}_${TASK_NAME} sbatch --job-name=${OUTPUT_DIR} \ --gres=gpu:1 \ --no-requeue \ --cpus-per-task=10 \ --hint=nomultithread \ --time=1:00:00 \ --output=jobinfo/${OUTPUT_DIR}_%j.out \ --error=jobinfo/${OUTPUT_DIR}_%j.err \ --qos=qos_gpu-t4 \ --wrap="module purge; module load pytorch-gpu/py3/1.7.0 ; export HF_DATASETS_OFFLINE=1; export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets; python compute_measures.py --seed=$SEED --saving_path=results --batch_size=$BATCH_SIZE --task_name=$TASK_NAME --model_name=/gpfswork/rech/toto/transformers_models/$MODEL" done done ```python # Sample code to reproduce the bug dataset_train = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_train = dataset_train.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter))) dataset_val = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_val = dataset_val.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter, args.filter + 5000))) dataset_test = load_dataset('imdb', split='test', download_mode="reuse_cache_if_exists") dataset_test = dataset_test.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True) ``` ## Expected results I believe I am doing something wrong with the objects. ## Actual results Traceback (most recent call last): File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 983, in _prepare_split check_duplicates=True, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/arrow_writer.py", line 192, in __init__ self.stream = pa.OSFile(self._path, "wb") File "pyarrow/io.pxi", line 829, in pyarrow.lib.OSFile.__cinit__ File "pyarrow/io.pxi", line 844, in pyarrow.lib.OSFile._open_writable File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status FileNotFoundError: [Errno 2] Failed to open local file '/gpfswork/rech/tts/unm25jp/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "compute_measures.py", line 181, in <module> train_loader, val_loader, test_loader = get_dataloader(args) File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader dataset_train = load_dataset('paws', "labeled_final", split='train', download_mode="reuse_cache_if_exists") File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 658, in _download_and_prepare + str(e) OSError: Cannot find data file. Original error: [Errno 2] Failed to open local file '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.8.0 - Platform: linux (jeanzay) - Python version: 3.7.8 - PyArrow version: 2.0.0
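A common workaround for this kind of race (a sketch, not an official API): build the shared cache once before submitting the sbatch jobs, so the concurrent processes only read already-prepared Arrow files:

```python
from datasets import load_dataset

# Run once, with HF_DATASETS_CACHE pointing at the shared directory,
# before launching the jobs that set HF_DATASETS_OFFLINE=1
load_dataset("imdb")
load_dataset("paws", "labeled_final")
```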
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2728/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2728/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2727/comments
https://api.github.com/repos/huggingface/datasets/issues/2727/events
https://github.com/huggingface/datasets/issues/2727
955,812,149
MDU6SXNzdWU5NTU4MTIxNDk=
2,727
Error in loading the Arabic Billion Words Corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2021-07-29T12:53:09Z"
"2021-07-30T13:03:55Z"
"2021-07-30T13:03:55Z"
CONTRIBUTOR
null
null
null
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results The datasets load successfully. ## Actual results ```python _extract_tags(self, sample, tag) 139 if len(out) > 0: 140 break --> 141 return out[0] 142 143 def _clean_text(self, text): IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.2 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2727/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2727/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2726/comments
https://api.github.com/repos/huggingface/datasets/issues/2726/events
https://github.com/huggingface/datasets/pull/2726
955,674,388
MDExOlB1bGxSZXF1ZXN0Njk5Mzg5MDk1
2,726
Typo fix `tokenize_exemple`
{ "avatar_url": "https://avatars.githubusercontent.com/u/30535146?v=4", "events_url": "https://api.github.com/users/shabie/events{/privacy}", "followers_url": "https://api.github.com/users/shabie/followers", "following_url": "https://api.github.com/users/shabie/following{/other_user}", "gists_url": "https://api.github.com/users/shabie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shabie", "id": 30535146, "login": "shabie", "node_id": "MDQ6VXNlcjMwNTM1MTQ2", "organizations_url": "https://api.github.com/users/shabie/orgs", "received_events_url": "https://api.github.com/users/shabie/received_events", "repos_url": "https://api.github.com/users/shabie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shabie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shabie/subscriptions", "type": "User", "url": "https://api.github.com/users/shabie" }
[]
closed
false
null
[]
null
0
"2021-07-29T10:03:37Z"
"2021-07-29T12:00:25Z"
"2021-07-29T12:00:25Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2726.diff", "html_url": "https://github.com/huggingface/datasets/pull/2726", "merged_at": "2021-07-29T12:00:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2726.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2726" }
There is a small typo in the main README.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2726/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2725/comments
https://api.github.com/repos/huggingface/datasets/issues/2725/events
https://github.com/huggingface/datasets/pull/2725
955,020,776
MDExOlB1bGxSZXF1ZXN0Njk4ODMwNjYw
2,725
Pass use_auth_token to request_etags
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-28T16:13:29Z"
"2021-07-28T16:38:02Z"
"2021-07-28T16:38:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2725.diff", "html_url": "https://github.com/huggingface/datasets/pull/2725", "merged_at": "2021-07-28T16:38:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2725.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2725" }
Fix #2724.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2725/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2725/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2724/comments
https://api.github.com/repos/huggingface/datasets/issues/2724/events
https://github.com/huggingface/datasets/issues/2724
954,919,607
MDU6SXNzdWU5NTQ5MTk2MDc=
2,724
404 Error when loading remote data files from private repo
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
3
"2021-07-28T14:24:23Z"
"2021-07-29T04:58:49Z"
"2021-07-28T16:38:01Z"
MEMBER
null
null
null
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/datasets/lewtun/asr-preds-test/resolve/main/preds.jsonl ``` ## Expected results Load dataset. ## Actual results 404 Error.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2724/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2724/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2723/comments
https://api.github.com/repos/huggingface/datasets/issues/2723/events
https://github.com/huggingface/datasets/pull/2723
954,864,104
MDExOlB1bGxSZXF1ZXN0Njk4Njk0NDMw
2,723
Fix en subset by modifying dataset_info with correct validation infos
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[]
closed
false
null
[]
null
0
"2021-07-28T13:36:19Z"
"2021-07-28T15:22:23Z"
"2021-07-28T15:22:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2723.diff", "html_url": "https://github.com/huggingface/datasets/pull/2723", "merged_at": "2021-07-28T15:22:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2723.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2723" }
- Related to: #2682 We correct the values of the `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`). Instead of having: `{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}` we replace them with the correct values: `{"name": "validation", "num_bytes": 825767266, "num_examples": 364608, "dataset_name": "c4"}` There are still issues with validation for other subsets, but I can't download and unzip all the files to check for the correct number of bytes. (If you have a fast way to obtain those values for other subsets, I can do this in this PR ... otherwise I can't spend those resources.)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2723/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2723/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2722/comments
https://api.github.com/repos/huggingface/datasets/issues/2722/events
https://github.com/huggingface/datasets/issues/2722
954,446,053
MDU6SXNzdWU5NTQ0NDYwNTM=
2,722
Missing cache file
{ "avatar_url": "https://avatars.githubusercontent.com/u/33200481?v=4", "events_url": "https://api.github.com/users/PosoSAgapo/events{/privacy}", "followers_url": "https://api.github.com/users/PosoSAgapo/followers", "following_url": "https://api.github.com/users/PosoSAgapo/following{/other_user}", "gists_url": "https://api.github.com/users/PosoSAgapo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PosoSAgapo", "id": 33200481, "login": "PosoSAgapo", "node_id": "MDQ6VXNlcjMzMjAwNDgx", "organizations_url": "https://api.github.com/users/PosoSAgapo/orgs", "received_events_url": "https://api.github.com/users/PosoSAgapo/received_events", "repos_url": "https://api.github.com/users/PosoSAgapo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PosoSAgapo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PosoSAgapo/subscriptions", "type": "User", "url": "https://api.github.com/users/PosoSAgapo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2021-07-28T03:52:07Z"
"2022-03-21T08:27:51Z"
"2022-03-21T08:27:51Z"
NONE
null
null
null
The cache file is strangely missing after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: '/Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json'`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2722/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2721/comments
https://api.github.com/repos/huggingface/datasets/issues/2721/events
https://github.com/huggingface/datasets/pull/2721
954,238,230
MDExOlB1bGxSZXF1ZXN0Njk4MTY0Njg3
2,721
Deal with the bad check in test_load.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2021-07-27T20:23:23Z"
"2021-07-28T09:58:34Z"
"2021-07-28T08:53:18Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2721.diff", "html_url": "https://github.com/huggingface/datasets/pull/2721", "merged_at": "2021-07-28T08:53:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/2721.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2721" }
This PR removes a check that was added in #2684. My intention with this check was to capture a URL in the error message, but instead it captures a substring of the previous regex match in the test function. Another option would be to replace this check with: ```python m_paths = re.findall(r"\S*_dummy/_dummy.py\b", str(exc_info.value)) # on Linux this will match a URL as well as a local_path due to different os.sep, so take the last element (a URL always comes last in the list) assert len(m_paths) > 0 and is_remote_url(m_paths[-1]) # is_remote_url comes from datasets.utils.file_utils ``` @lhoestq Let me know which of these two approaches (delete or replace) you prefer.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2721/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2721/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2720/comments
https://api.github.com/repos/huggingface/datasets/issues/2720/events
https://github.com/huggingface/datasets/pull/2720
954,024,426
MDExOlB1bGxSZXF1ZXN0Njk3OTgxNjMx
2,720
fix: πŸ› fix two typos
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
0
"2021-07-27T15:50:17Z"
"2021-07-27T18:38:17Z"
"2021-07-27T18:38:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2720.diff", "html_url": "https://github.com/huggingface/datasets/pull/2720", "merged_at": "2021-07-27T18:38:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2720.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2720" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2720/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2720/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2719/comments
https://api.github.com/repos/huggingface/datasets/issues/2719/events
https://github.com/huggingface/datasets/issues/2719
953,932,416
MDU6SXNzdWU5NTM5MzI0MTY=
2,719
Use ETag in streaming mode to detect resource updates
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
open
false
null
[]
null
0
"2021-07-27T14:17:09Z"
"2021-10-22T09:36:08Z"
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I currently have no way to know whether the remote data has been updated, so I don't know when to invalidate my cache. **Describe the solution you'd like** Take the ETag of the data files into account and expose it (directly or through a hash) as a signal I can use to invalidate my cache. **Describe alternatives you've considered** None
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2719/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2719/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2718/comments
https://api.github.com/repos/huggingface/datasets/issues/2718/events
https://github.com/huggingface/datasets/pull/2718
953,360,663
MDExOlB1bGxSZXF1ZXN0Njk3NDE0NTQy
2,718
New documentation structure
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
5
"2021-07-26T23:15:13Z"
"2021-09-13T17:20:53Z"
"2021-09-13T17:20:52Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2718.diff", "html_url": "https://github.com/huggingface/datasets/pull/2718", "merged_at": "2021-09-13T17:20:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/2718.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2718" }
Organize Datasets documentation into four documentation types to improve clarity and discoverability of content. **Content to add in the very short term (feel free to add anything I'm missing):** - A discussion on why Datasets uses Arrow, including some context and background on that choice. It would also be great to talk about Datasets speed and performance here, and if you can share any benchmarking/tests you did, that would be awesome! Finally, a discussion about how memory-mapping frees the user from RAM constraints would be very helpful. - Explain why you would want to disable or override verifications when loading a dataset. - If possible, include a code sample of when the number of elements in the field of an output dictionary aren’t the same as the other fields in the output dictionary (taken from the [note](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset) here).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2718/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2718/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2717/comments
https://api.github.com/repos/huggingface/datasets/issues/2717/events
https://github.com/huggingface/datasets/pull/2717
952,979,976
MDExOlB1bGxSZXF1ZXN0Njk3MDkzNDEx
2,717
Fix shuffle on IterableDataset that disables batching in case any functions were mapped
{ "avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4", "events_url": "https://api.github.com/users/amankhandelia/events{/privacy}", "followers_url": "https://api.github.com/users/amankhandelia/followers", "following_url": "https://api.github.com/users/amankhandelia/following{/other_user}", "gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amankhandelia", "id": 7098967, "login": "amankhandelia", "node_id": "MDQ6VXNlcjcwOTg5Njc=", "organizations_url": "https://api.github.com/users/amankhandelia/orgs", "received_events_url": "https://api.github.com/users/amankhandelia/received_events", "repos_url": "https://api.github.com/users/amankhandelia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions", "type": "User", "url": "https://api.github.com/users/amankhandelia" }
[]
closed
false
null
[]
null
0
"2021-07-26T14:42:22Z"
"2021-07-26T18:04:14Z"
"2021-07-26T16:30:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2717.diff", "html_url": "https://github.com/huggingface/datasets/pull/2717", "merged_at": "2021-07-26T16:30:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2717.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2717" }
Made a very minor change to fix issue #2716: added the missing argument in the constructor call. As discussed in the bug report, the change prevents the `shuffle` method call from resetting the value of the `batched` attribute in `MappedExamplesIterable`. Fix #2716.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2717/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2717/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2716/comments
https://api.github.com/repos/huggingface/datasets/issues/2716/events
https://github.com/huggingface/datasets/issues/2716
952,902,778
MDU6SXNzdWU5NTI5MDI3Nzg=
2,716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
{ "avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4", "events_url": "https://api.github.com/users/amankhandelia/events{/privacy}", "followers_url": "https://api.github.com/users/amankhandelia/followers", "following_url": "https://api.github.com/users/amankhandelia/following{/other_user}", "gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amankhandelia", "id": 7098967, "login": "amankhandelia", "node_id": "MDQ6VXNlcjcwOTg5Njc=", "organizations_url": "https://api.github.com/users/amankhandelia/orgs", "received_events_url": "https://api.github.com/users/amankhandelia/received_events", "repos_url": "https://api.github.com/users/amankhandelia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions", "type": "User", "url": "https://api.github.com/users/amankhandelia" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
3
"2021-07-26T13:24:59Z"
"2021-07-26T18:04:43Z"
"2021-07-26T18:04:43Z"
CONTRIBUTOR
null
null
null
When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`. I did RCA on the dataset codebase; the problem emerges from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), which reads `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means that the iterator falls back to the default constructor value, which in this case is `False`. To remedy the problem we can change this line to `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2716/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2715/comments
https://api.github.com/repos/huggingface/datasets/issues/2715/events
https://github.com/huggingface/datasets/pull/2715
952,845,229
MDExOlB1bGxSZXF1ZXN0Njk2OTc5MjQ1
2,715
Update PAN-X data URL in XTREME dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2021-07-26T12:21:17Z"
"2021-07-26T13:27:59Z"
"2021-07-26T13:27:59Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2715.diff", "html_url": "https://github.com/huggingface/datasets/pull/2715", "merged_at": "2021-07-26T13:27:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2715" }
Related to #2710, #2691.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2715/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2714/comments
https://api.github.com/repos/huggingface/datasets/issues/2714/events
https://github.com/huggingface/datasets/issues/2714
952,580,820
MDU6SXNzdWU5NTI1ODA4MjA=
2,714
add more precise information for size
{ "avatar_url": "https://avatars.githubusercontent.com/u/1493902?v=4", "events_url": "https://api.github.com/users/pennyl67/events{/privacy}", "followers_url": "https://api.github.com/users/pennyl67/followers", "following_url": "https://api.github.com/users/pennyl67/following{/other_user}", "gists_url": "https://api.github.com/users/pennyl67/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pennyl67", "id": 1493902, "login": "pennyl67", "node_id": "MDQ6VXNlcjE0OTM5MDI=", "organizations_url": "https://api.github.com/users/pennyl67/orgs", "received_events_url": "https://api.github.com/users/pennyl67/received_events", "repos_url": "https://api.github.com/users/pennyl67/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pennyl67/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pennyl67/subscriptions", "type": "User", "url": "https://api.github.com/users/pennyl67" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
1
"2021-07-26T07:11:03Z"
"2021-07-26T09:16:25Z"
null
NONE
null
null
null
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2714/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2713/comments
https://api.github.com/repos/huggingface/datasets/issues/2713/events
https://github.com/huggingface/datasets/pull/2713
952,515,256
MDExOlB1bGxSZXF1ZXN0Njk2Njk3MzU0
2,713
Enumerate all ner_tags values in WNUT 17 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-26T05:22:16Z"
"2021-07-26T09:30:55Z"
"2021-07-26T09:30:55Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2713.diff", "html_url": "https://github.com/huggingface/datasets/pull/2713", "merged_at": "2021-07-26T09:30:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/2713.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2713" }
This PR does: - Enumerate all ner_tags in dataset card Data Fields section - Add all metadata tags to dataset card Close #2709.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2713/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2710/comments
https://api.github.com/repos/huggingface/datasets/issues/2710/events
https://github.com/huggingface/datasets/pull/2710
951,723,326
MDExOlB1bGxSZXF1ZXN0Njk2MDYyNjAy
2,710
Update WikiANN data URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
1
"2021-07-23T16:29:21Z"
"2021-07-26T09:34:23Z"
"2021-07-26T09:34:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2710.diff", "html_url": "https://github.com/huggingface/datasets/pull/2710", "merged_at": "2021-07-26T09:34:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/2710.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2710" }
WikiANN data source URL is no longer accessible: 404 error from Dropbox. We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card. Close #2691.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2710/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2710/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2709/comments
https://api.github.com/repos/huggingface/datasets/issues/2709/events
https://github.com/huggingface/datasets/issues/2709
951,534,757
MDU6SXNzdWU5NTE1MzQ3NTc=
2,709
Missing documentation for wnut_17 (ner_tags)
{ "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maxpel", "id": 31095360, "login": "maxpel", "node_id": "MDQ6VXNlcjMxMDk1MzYw", "organizations_url": "https://api.github.com/users/maxpel/orgs", "received_events_url": "https://api.github.com/users/maxpel/received_events", "repos_url": "https://api.github.com/users/maxpel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "type": "User", "url": "https://api.github.com/users/maxpel" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
"2021-07-23T12:25:32Z"
"2021-07-26T09:30:55Z"
"2021-07-26T09:30:55Z"
CONTRIBUTOR
null
null
null
On the info page of the wnut_17 dataset (https://huggingface.co/datasets/wnut_17), the model output of ner_tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` I trained a model with the data and it gives me 13 classes: ``` "id2label": { "0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12 } "label2id": { "0": 0, "1": 1, "10": 10, "11": 11, "12": 12, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9 } ``` The paper (https://www.aclweb.org/anthology/W17-4418.pdf) explains those 6 categories, but the ordering does not match: ``` 1. person 2. location (including GPE, facility) 3. corporation 4. product (tangible goods, or well-defined services) 5. creative-work (song, movie, book and so on) 6. group (subsuming music band, sports team, and non-corporate organisations) ``` It would be very helpful for me if somebody could clarify the model outputs and explain the "B-" and "I-" prefixes to me. Really great work with this and the other packages; I couldn't believe that training the model with that data was basically a one-liner!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2709/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2708/comments
https://api.github.com/repos/huggingface/datasets/issues/2708/events
https://github.com/huggingface/datasets/issues/2708
951,092,660
MDU6SXNzdWU5NTEwOTI2NjA=
2,708
QASC: incomplete training set
{ "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danyaljj", "id": 2441454, "login": "danyaljj", "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "repos_url": "https://api.github.com/users/danyaljj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "type": "User", "url": "https://api.github.com/users/danyaljj" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2021-07-22T21:59:44Z"
"2021-07-23T13:30:07Z"
"2021-07-23T13:30:07Z"
CONTRIBUTOR
null
null
null
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python import json from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instances)}") for x in instances: print(json.dumps(x)) load_instances('test') load_instances('validation') load_instances('train') ``` ## Results For test and validation, we can see the examples in the output (which is good!): ``` split: test - size: 920 {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Anthax", "under water", "uterus", "wombs", "two", "moles", "live", "embryo"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What type of birth do therian mammals have? (A) Anthax (B) under water (C) uterus (D) wombs (E) two (F) moles (G) live (H) embryo", "id": "3C44YUNSI1OBFBB8D36GODNOZN9DPA", "question": "What type of birth do therian mammals have?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Corvidae", "arthropods", "birds", "backbones", "keratin", "Jurassic", "front paws", "Parakeets."]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "By what time had mouse-sized viviparous mammals evolved? (A) Corvidae (B) arthropods (C) birds (D) backbones (E) keratin (F) Jurassic (G) front paws (H) Parakeets.", "id": "3B1NLC6UGZVERVLZFT7OUYQLD1SGPZ", "question": "By what time had mouse-sized viviparous mammals evolved?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Reduced friction", "causes infection", "vital to a good life", "prevents water loss", "camouflage from consumers", "Protection against predators", "spur the growth of the plant", "a smooth surface"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What does a plant's skin do? (A) Reduced friction (B) causes infection (C) vital to a good life (D) prevents water loss (E) camouflage from consumers (F) Protection against predators (G) spur the growth of the plant (H) a smooth surface", "id": "3QRYMNZ7FYGITFVSJET3PS0F4S0NT9", "question": "What does a plant's skin do?"} ... ``` However, only a few instances are loaded for the training split, which is not correct. ## Environment info - `datasets` version: '1.10.2' - Platform: macOS - Python version: 3.7 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2708/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2707/comments
https://api.github.com/repos/huggingface/datasets/issues/2707/events
https://github.com/huggingface/datasets/issues/2707
950,812,945
MDU6SXNzdWU5NTA4MTI5NDU=
2,707
404 Not Found Error when loading LAMA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26467159?v=4", "events_url": "https://api.github.com/users/dwil2444/events{/privacy}", "followers_url": "https://api.github.com/users/dwil2444/followers", "following_url": "https://api.github.com/users/dwil2444/following{/other_user}", "gists_url": "https://api.github.com/users/dwil2444/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwil2444", "id": 26467159, "login": "dwil2444", "node_id": "MDQ6VXNlcjI2NDY3MTU5", "organizations_url": "https://api.github.com/users/dwil2444/orgs", "received_events_url": "https://api.github.com/users/dwil2444/received_events", "repos_url": "https://api.github.com/users/dwil2444/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwil2444/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwil2444/subscriptions", "type": "User", "url": "https://api.github.com/users/dwil2444" }
[]
closed
false
null
[]
null
3
"2021-07-22T15:52:33Z"
"2021-07-26T14:29:07Z"
"2021-07-26T14:29:07Z"
NONE
null
null
null
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/lama/lama.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/lama/lama.py`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2707/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2707/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2706/comments
https://api.github.com/repos/huggingface/datasets/issues/2706/events
https://github.com/huggingface/datasets/pull/2706
950,606,561
MDExOlB1bGxSZXF1ZXN0Njk1MTI3ODgz
2,706
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-22T12:29:29Z"
"2021-07-22T12:43:00Z"
"2021-07-22T12:43:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2706.diff", "html_url": "https://github.com/huggingface/datasets/pull/2706", "merged_at": "2021-07-22T12:43:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/2706.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2706" }
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2706/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2705/comments
https://api.github.com/repos/huggingface/datasets/issues/2705/events
https://github.com/huggingface/datasets/issues/2705
950,488,583
MDU6SXNzdWU5NTA0ODg1ODM=
2,705
404 not found error on loading WIKIANN dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/39296659?v=4", "events_url": "https://api.github.com/users/ronbutan/events{/privacy}", "followers_url": "https://api.github.com/users/ronbutan/followers", "following_url": "https://api.github.com/users/ronbutan/following{/other_user}", "gists_url": "https://api.github.com/users/ronbutan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ronbutan", "id": 39296659, "login": "ronbutan", "node_id": "MDQ6VXNlcjM5Mjk2NjU5", "organizations_url": "https://api.github.com/users/ronbutan/orgs", "received_events_url": "https://api.github.com/users/ronbutan/received_events", "repos_url": "https://api.github.com/users/ronbutan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ronbutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ronbutan/subscriptions", "type": "User", "url": "https://api.github.com/users/ronbutan" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
1
"2021-07-22T09:55:50Z"
"2021-07-23T08:07:32Z"
"2021-07-23T08:07:32Z"
NONE
null
null
null
## Describe the bug Unable to retrieve the wikiann English dataset. ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results The Colab notebook should display a successful download status. ## Actual results FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2705/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2704/comments
https://api.github.com/repos/huggingface/datasets/issues/2704/events
https://github.com/huggingface/datasets/pull/2704
950,483,980
MDExOlB1bGxSZXF1ZXN0Njk1MDIzMTEz
2,704
Fix pick default config name message
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-22T09:49:43Z"
"2021-07-22T10:02:41Z"
"2021-07-22T10:02:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2704.diff", "html_url": "https://github.com/huggingface/datasets/pull/2704", "merged_at": "2021-07-22T10:02:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2704" }
The error message that tells which config name to load is not displayed. This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659 I fixed that by making the config kwargs empty by default, even if default parameters are passed. Fix https://github.com/huggingface/datasets/issues/2703
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2704/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2703/comments
https://api.github.com/repos/huggingface/datasets/issues/2703/events
https://github.com/huggingface/datasets/issues/2703
950,482,284
MDU6SXNzdWU5NTA0ODIyODQ=
2,703
Bad message when config name is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
0
"2021-07-22T09:47:23Z"
"2021-07-22T10:02:40Z"
"2021-07-22T10:02:40Z"
MEMBER
null
null
null
When loading a dataset that has several configurations, we expect to see an error message if the user doesn't specify a config name. However, in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message: ```python import datasets datasets.load_dataset("glue") ``` raises ```python AttributeError: 'BuilderConfig' object has no attribute 'text_features' ``` instead of ```python ValueError: Config name is missing. Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax'] Example of usage: `load_dataset('glue', 'cola')` ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2703/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2702/comments
https://api.github.com/repos/huggingface/datasets/issues/2702/events
https://github.com/huggingface/datasets/pull/2702
950,448,159
MDExOlB1bGxSZXF1ZXN0Njk0OTkyOTc1
2,702
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-22T09:04:39Z"
"2021-07-22T09:17:39Z"
"2021-07-22T09:17:38Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2702.diff", "html_url": "https://github.com/huggingface/datasets/pull/2702", "merged_at": "2021-07-22T09:17:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/2702.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2702" }
Update BibTeX entry.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2702/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2702/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2701/comments
https://api.github.com/repos/huggingface/datasets/issues/2701/events
https://github.com/huggingface/datasets/pull/2701
950,422,403
MDExOlB1bGxSZXF1ZXN0Njk0OTcxMzM3
2,701
Fix download_mode docstrings
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
0
"2021-07-22T08:30:25Z"
"2021-07-22T09:33:31Z"
"2021-07-22T09:33:31Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2701.diff", "html_url": "https://github.com/huggingface/datasets/pull/2701", "merged_at": "2021-07-22T09:33:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/2701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2701" }
Fix `download_mode` docstrings.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2701/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2700/comments
https://api.github.com/repos/huggingface/datasets/issues/2700/events
https://github.com/huggingface/datasets/issues/2700
950,276,325
MDU6SXNzdWU5NTAyNzYzMjU=
2,700
from datasets import Dataset is failing
{ "avatar_url": "https://avatars.githubusercontent.com/u/5582286?v=4", "events_url": "https://api.github.com/users/kswamy15/events{/privacy}", "followers_url": "https://api.github.com/users/kswamy15/followers", "following_url": "https://api.github.com/users/kswamy15/following{/other_user}", "gists_url": "https://api.github.com/users/kswamy15/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kswamy15", "id": 5582286, "login": "kswamy15", "node_id": "MDQ6VXNlcjU1ODIyODY=", "organizations_url": "https://api.github.com/users/kswamy15/orgs", "received_events_url": "https://api.github.com/users/kswamy15/received_events", "repos_url": "https://api.github.com/users/kswamy15/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kswamy15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kswamy15/subscriptions", "type": "User", "url": "https://api.github.com/users/kswamy15" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
1
"2021-07-22T03:51:23Z"
"2021-07-22T07:23:45Z"
"2021-07-22T07:09:07Z"
NONE
null
null
null
## Describe the bug Importing `Dataset` from `datasets` fails on Google Colab with `ModuleNotFoundError: No module named 'tqdm.contrib.concurrent'`. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results The import succeeds without error. ## Actual results /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: latest version as of 07/21/2021 - Platform: Google Colab - Python version: 3.7 - PyArrow version:
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2700/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2699/comments
https://api.github.com/repos/huggingface/datasets/issues/2699/events
https://github.com/huggingface/datasets/issues/2699
950,221,226
MDU6SXNzdWU5NTAyMjEyMjY=
2,699
cannot combine splits merging and streaming?
{ "avatar_url": "https://avatars.githubusercontent.com/u/4436747?v=4", "events_url": "https://api.github.com/users/eyaler/events{/privacy}", "followers_url": "https://api.github.com/users/eyaler/followers", "following_url": "https://api.github.com/users/eyaler/following{/other_user}", "gists_url": "https://api.github.com/users/eyaler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eyaler", "id": 4436747, "login": "eyaler", "node_id": "MDQ6VXNlcjQ0MzY3NDc=", "organizations_url": "https://api.github.com/users/eyaler/orgs", "received_events_url": "https://api.github.com/users/eyaler/received_events", "repos_url": "https://api.github.com/users/eyaler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eyaler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eyaler/subscriptions", "type": "User", "url": "https://api.github.com/users/eyaler" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
1
"2021-07-22T01:13:25Z"
"2021-07-22T08:27:47Z"
null
NONE
null
null
null
This does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` with error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` These work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_dataset('mc4','iw',split='train',streaming=True)` `dataset = datasets.load_dataset('mc4','iw',split='validation',streaming=True)` I could not find a reference to this in the documentation, and the error message is confusing. It would also be nice to allow streaming for the merged splits.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2699/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2698/comments
https://api.github.com/repos/huggingface/datasets/issues/2698/events
https://github.com/huggingface/datasets/pull/2698
950,159,867
MDExOlB1bGxSZXF1ZXN0Njk0NzUxMzMw
2,698
Ignore empty batch when writing
{ "avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4", "events_url": "https://api.github.com/users/pcuenca/events{/privacy}", "followers_url": "https://api.github.com/users/pcuenca/followers", "following_url": "https://api.github.com/users/pcuenca/following{/other_user}", "gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pcuenca", "id": 1177582, "login": "pcuenca", "node_id": "MDQ6VXNlcjExNzc1ODI=", "organizations_url": "https://api.github.com/users/pcuenca/orgs", "received_events_url": "https://api.github.com/users/pcuenca/received_events", "repos_url": "https://api.github.com/users/pcuenca/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions", "type": "User", "url": "https://api.github.com/users/pcuenca" }
[]
closed
false
null
[]
null
0
"2021-07-21T22:35:30Z"
"2021-07-26T14:56:03Z"
"2021-07-26T13:25:26Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2698.diff", "html_url": "https://github.com/huggingface/datasets/pull/2698", "merged_at": "2021-07-26T13:25:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/2698.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2698" }
This prevents a schema update with unknown column types, as reported in #2644. This is my first attempt at fixing the issue. I tested the following: - First batch returned by a batched map operation is empty. - An intermediate batch is empty. - `python -m unittest tests.test_arrow_writer` passes. However, `arrow_writer` looks like a pretty generic interface; I'm not sure if there are other uses I may have overlooked. Let me know if that's the case, or if a better approach would be preferable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2698/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2698/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2697/comments
https://api.github.com/repos/huggingface/datasets/issues/2697/events
https://github.com/huggingface/datasets/pull/2697
950,021,623
MDExOlB1bGxSZXF1ZXN0Njk0NjMyODg0
2,697
Fix import on Colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
1
"2021-07-21T19:03:38Z"
"2021-07-22T07:09:08Z"
"2021-07-22T07:09:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2697.diff", "html_url": "https://github.com/huggingface/datasets/pull/2697", "merged_at": "2021-07-22T07:09:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/2697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2697" }
Fix #2695, fix #2700.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2697/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2696/comments
https://api.github.com/repos/huggingface/datasets/issues/2696/events
https://github.com/huggingface/datasets/pull/2696
949,901,726
MDExOlB1bGxSZXF1ZXN0Njk0NTMwODg3
2,696
Add support for disable_progress_bar on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
1
"2021-07-21T16:34:53Z"
"2021-07-26T13:31:14Z"
"2021-07-26T09:38:37Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2696.diff", "html_url": "https://github.com/huggingface/datasets/pull/2696", "merged_at": "2021-07-26T09:38:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2696.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2696" }
This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains nicely why the current approach (calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would not work on Windows.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2696/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2695/comments
https://api.github.com/repos/huggingface/datasets/issues/2695/events
https://github.com/huggingface/datasets/issues/2695
949,864,823
MDU6SXNzdWU5NDk4NjQ4MjM=
2,695
Cannot import load_dataset on Colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bayartsogt-ya", "id": 43239645, "login": "bayartsogt-ya", "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "type": "User", "url": "https://api.github.com/users/bayartsogt-ya" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
5
"2021-07-21T15:52:51Z"
"2021-07-22T07:26:25Z"
"2021-07-22T07:09:07Z"
NONE
null
null
null
## Describe the bug Got a `ModuleNotFoundError` for `tqdm.contrib.concurrent` when importing `load_dataset` from `datasets`. ## Steps to reproduce the bug Here is a [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error. On Colab: ```python !pip install datasets from datasets import load_dataset ``` ## Expected results Works without error. ## Actual results ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-8cc7de4c69eb> in <module>() ----> 1 from datasets import load_dataset, load_metric, Metric, MetricInfo, Features, Value 2 from sklearn.metrics import mean_squared_error /usr/local/lib/python3.7/dist-packages/datasets/__init__.py in <module>() 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in <module>() 40 from tqdm.auto import tqdm 41 ---> 42 from datasets.tasks.text_classification import TextClassification 43 44 from . import config, utils /usr/local/lib/python3.7/dist-packages/datasets/tasks/__init__.py in <module>() 1 from typing import Optional 2 ----> 3 from ..utils.logging import get_logger 4 from .automatic_speech_recognition import AutomaticSpeechRecognition 5 from .base import TaskTemplate /usr/local/lib/python3.7/dist-packages/datasets/utils/__init__.py in <module>() 19 20 from . import logging ---> 21 from .download_manager import DownloadManager, GenerateMode 22 from .file_utils import DownloadConfig, cached_path, hf_bucket_url, is_remote_url, temp_seed 23 from .mock_download_manager import MockDownloadManager /usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py in <module>() 24 25 from .. import config ---> 26 from .file_utils import ( 27 DownloadConfig, 28 cached_path, /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.0 - Platform: Colab - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2695/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2695/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2694/comments
https://api.github.com/repos/huggingface/datasets/issues/2694/events
https://github.com/huggingface/datasets/pull/2694
949,844,722
MDExOlB1bGxSZXF1ZXN0Njk0NDg0NTcy
2,694
fix: πŸ› change string format to allow copy/paste to work in bash
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
0
"2021-07-21T15:30:40Z"
"2021-07-22T10:41:47Z"
"2021-07-22T10:41:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2694.diff", "html_url": "https://github.com/huggingface/datasets/pull/2694", "merged_at": "2021-07-22T10:41:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/2694.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2694" }
Before: copy/paste resulted in an error because the square bracket characters `[]` are special characters in bash
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2694/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2693/comments
https://api.github.com/repos/huggingface/datasets/issues/2693/events
https://github.com/huggingface/datasets/pull/2693
949,797,014
MDExOlB1bGxSZXF1ZXN0Njk0NDQ1ODAz
2,693
Fix OSCAR Esperanto
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-21T14:43:50Z"
"2021-07-21T14:53:52Z"
"2021-07-21T14:53:51Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2693.diff", "html_url": "https://github.com/huggingface/datasets/pull/2693", "merged_at": "2021-07-21T14:53:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/2693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2693" }
The Esperanto part (original) of OSCAR has the wrong number of examples: ```python from datasets import load_dataset raw_datasets = load_dataset("oscar", "unshuffled_original_eo") ``` raises ```python NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=314064514, num_examples=121168, dataset_name='oscar')}] ``` I updated the number of expected examples in `dataset_infos.json`. cc @sgugger
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2693/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2692/comments
https://api.github.com/repos/huggingface/datasets/issues/2692/events
https://github.com/huggingface/datasets/pull/2692
949,765,484
MDExOlB1bGxSZXF1ZXN0Njk0NDE4MDg1
2,692
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-21T14:23:35Z"
"2021-07-21T15:31:41Z"
"2021-07-21T15:31:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2692.diff", "html_url": "https://github.com/huggingface/datasets/pull/2692", "merged_at": "2021-07-21T15:31:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2692.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2692" }
Update BibTeX entry
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2692/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2692/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2691/comments
https://api.github.com/repos/huggingface/datasets/issues/2691/events
https://github.com/huggingface/datasets/issues/2691
949,758,379
MDU6SXNzdWU5NDk3NTgzNzk=
2,691
xtreme / pan-x cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
5
"2021-07-21T14:18:05Z"
"2021-07-26T09:34:22Z"
"2021-07-26T09:34:22Z"
CONTRIBUTOR
null
null
null
## Describe the bug Dataset xtreme / pan-x cannot be loaded. Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset. ## Actual results ``` FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ``` ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2691/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2691/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2690/comments
https://api.github.com/repos/huggingface/datasets/issues/2690/events
https://github.com/huggingface/datasets/pull/2690
949,574,500
MDExOlB1bGxSZXF1ZXN0Njk0MjU5MDc1
2,690
Docs details
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
1
"2021-07-21T10:43:14Z"
"2021-07-27T18:40:54Z"
"2021-07-27T18:40:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2690.diff", "html_url": "https://github.com/huggingface/datasets/pull/2690", "merged_at": "2021-07-27T18:40:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2690.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2690" }
Some comments here: - the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + a one-liner that installs all the requirements / alternatively a requirements.txt file) - "If you’d like to play with the examples, you must install it from source." in https://huggingface.co/docs/datasets/installation.html: it's not clear to me what this means (what are these "examples"?) - in https://huggingface.co/docs/datasets/loading_datasets.html: "or AWS bucket if it’s not already stored in the library". It's the only place in the doc (aside from the docstring https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the "AWS bucket" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https://s3.amazonaws.com/datasets.huggingface.co and/or https://huggingface.co/docs/datasets/filesystems.html. - example in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files is obsoleted by https://github.com/huggingface/datasets/pull/2326. Also: see https://github.com/huggingface/datasets/issues/2691 for a bug on this specific dataset. - in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files the doc says "After you’ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:", but the following example does not show how to use `data_dir` - in https://huggingface.co/docs/datasets/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). This comment applies in many places in the doc: I would want the API reference to contain doc for all the code/functions/classes... and I would want a lot more links inside the doc pointing to the API entries. - in the API reference (docstrings) I would prefer "SOURCE" to link to github instead of a copy of the code inside the docs site (eg. https://github.com/huggingface/datasets/blob/master/src/datasets/load.py#L711 instead of https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset) - it seems like not all the API is exposed in the doc. For example, there is no doc for [`disable_progress_bar`](https://github.com/huggingface/datasets/search?q=disable_progress_bar), see https://huggingface.co/docs/datasets/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https://huggingface.co/docs/datasets/package_reference/logging_methods.html) - in https://huggingface.co/docs/datasets/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, "The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:", maybe link to https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON and give it a name ("line-delimited JSON"? "JSON Lines" as in https://huggingface.co/docs/datasets/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?) 
- in https://huggingface.co/docs/datasets/loading_datasets.html, for the local files section, it would be nice to provide sample csv / json / text files to download, so that it's easier for the reader to try to load them (instead: they won't try) - the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... for [parallelizing](https://huggingface.co/docs/datasets/processing.html#multiprocessing)?). Nor does it give an idea of how many shards a dataset typically should have and why. - the code example in https://huggingface.co/docs/datasets/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined before in the doc.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2690/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2689/comments
https://api.github.com/repos/huggingface/datasets/issues/2689/events
https://github.com/huggingface/datasets/issues/2689
949,447,104
MDU6SXNzdWU5NDk0NDcxMDQ=
2,689
cannot save the dataset to disk after rename_column
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
4
"2021-07-21T08:13:40Z"
"2023-11-02T14:54:00Z"
"2021-07-21T13:11:04Z"
CONTRIBUTOR
null
null
null
## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from_disk In [5]: dataset=Dataset.from_dict({'foo': [0]}) In [7]: dataset.save_to_disk('foo') In [8]: dataset=load_from_disk('foo') In [10]: dataset=dataset.rename_column('foo', 'bar') In [11]: dataset.save_to_disk('foo') --------------------------------------------------------------------------- PermissionError Traceback (most recent call last) <ipython-input-11-a3bc0d4fc339> in <module> ----> 1 dataset.save_to_disk('foo') /mnt/beegfs/projects/meerqat/anaconda3/envs/meerqat/lib/python3.7/site-packages/datasets/arrow_dataset.py in save_to_disk(self, dataset_path , fs) 597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths: 598 raise PermissionError( --> 599 f"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself." 600 ) 601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths: PermissionError: Tried to overwrite foo/dataset.arrow but a dataset can't overwrite itself. ``` N. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. starting from `In [8]`) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2689/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2688/comments
https://api.github.com/repos/huggingface/datasets/issues/2688/events
https://github.com/huggingface/datasets/issues/2688
949,182,074
MDU6SXNzdWU5NDkxODIwNzQ=
2,688
hebrew language codes he and iw should be treated as aliases
{ "avatar_url": "https://avatars.githubusercontent.com/u/4436747?v=4", "events_url": "https://api.github.com/users/eyaler/events{/privacy}", "followers_url": "https://api.github.com/users/eyaler/followers", "following_url": "https://api.github.com/users/eyaler/following{/other_user}", "gists_url": "https://api.github.com/users/eyaler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eyaler", "id": 4436747, "login": "eyaler", "node_id": "MDQ6VXNlcjQ0MzY3NDc=", "organizations_url": "https://api.github.com/users/eyaler/orgs", "received_events_url": "https://api.github.com/users/eyaler/received_events", "repos_url": "https://api.github.com/users/eyaler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eyaler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eyaler/subscriptions", "type": "User", "url": "https://api.github.com/users/eyaler" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2021-07-20T23:13:52Z"
"2021-07-21T16:34:53Z"
"2021-07-21T16:34:53Z"
NONE
null
null
null
https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (he), as it uses the older language code iw, preventing discoverability.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2688/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2687/comments
https://api.github.com/repos/huggingface/datasets/issues/2687/events
https://github.com/huggingface/datasets/pull/2687
948,890,481
MDExOlB1bGxSZXF1ZXN0NjkzNjY1NDI2
2,687
Minor documentation fix
{ "avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4", "events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}", "followers_url": "https://api.github.com/users/slowwavesleep/followers", "following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}", "gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slowwavesleep", "id": 44175589, "login": "slowwavesleep", "node_id": "MDQ6VXNlcjQ0MTc1NTg5", "organizations_url": "https://api.github.com/users/slowwavesleep/orgs", "received_events_url": "https://api.github.com/users/slowwavesleep/received_events", "repos_url": "https://api.github.com/users/slowwavesleep/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions", "type": "User", "url": "https://api.github.com/users/slowwavesleep" }
[]
closed
false
null
[]
null
0
"2021-07-20T17:43:23Z"
"2021-07-21T13:04:55Z"
"2021-07-21T13:04:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2687.diff", "html_url": "https://github.com/huggingface/datasets/pull/2687", "merged_at": "2021-07-21T13:04:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2687.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2687" }
Currently, the [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error. A link to the `matinf` dataset in the [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad` instead. This PR fixes that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2687/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2686/comments
https://api.github.com/repos/huggingface/datasets/issues/2686/events
https://github.com/huggingface/datasets/pull/2686
948,811,669
MDExOlB1bGxSZXF1ZXN0NjkzNTk4OTE3
2,686
Fix bad config ids that name cache directories
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-20T16:00:45Z"
"2021-07-20T16:27:15Z"
"2021-07-20T16:27:15Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2686.diff", "html_url": "https://github.com/huggingface/datasets/pull/2686", "merged_at": "2021-07-20T16:27:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2686.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2686" }
`data_dir=None` was considered a dataset config parameter, hence creating a special config_id for every dataset being loaded. Since the config_id is used to name the cache directories, this led to datasets being regenerated for users. I fixed this by ignoring the value of `data_dir` when it's `None` when computing the config_id. I also added a test to make sure the cache directories are not unexpectedly renamed in the future. Fix https://github.com/huggingface/datasets/issues/2683
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2686/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2685/comments
https://api.github.com/repos/huggingface/datasets/issues/2685/events
https://github.com/huggingface/datasets/pull/2685
948,791,572
MDExOlB1bGxSZXF1ZXN0NjkzNTgxNTk2
2,685
Fix Blog Authorship Corpus dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
"2021-07-20T15:44:50Z"
"2021-07-21T13:11:58Z"
"2021-07-21T13:11:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2685.diff", "html_url": "https://github.com/huggingface/datasets/pull/2685", "merged_at": "2021-07-21T13:11:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2685.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2685" }
This PR: - Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError` - Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising a `UnicodeDecodeError` for some files. Close #2679.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2685/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2685/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2684/comments
https://api.github.com/repos/huggingface/datasets/issues/2684/events
https://github.com/huggingface/datasets/pull/2684
948,771,753
MDExOlB1bGxSZXF1ZXN0NjkzNTY0MDY4
2,684
Print absolute local paths in load_dataset error messages
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
0
"2021-07-20T15:28:28Z"
"2021-07-22T20:48:19Z"
"2021-07-22T14:01:10Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2684.diff", "html_url": "https://github.com/huggingface/datasets/pull/2684", "merged_at": "2021-07-22T14:01:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2684" }
Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2684/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2684/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2683/comments
https://api.github.com/repos/huggingface/datasets/issues/2683/events
https://github.com/huggingface/datasets/issues/2683
948,721,379
MDU6SXNzdWU5NDg3MjEzNzk=
2,683
Cache directories changed due to recent changes in how config kwargs are handled
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
0
"2021-07-20T14:37:57Z"
"2021-07-20T16:27:15Z"
"2021-07-20T16:27:15Z"
MEMBER
null
null
null
Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example: ```python from datasets import load_dataset_builder c4_builder = load_dataset_builder("c4", "en") print(c4_builder.cache_dir) # /Users/quentinlhoest/.cache/huggingface/datasets/c4/en-174d3b7155eb68db/0.0.0/... # instead of # /Users/quentinlhoest/.cache/huggingface/datasets/c4/en/0.0.0/... ``` This issue could be annoying since it would simply ignore users' old cache directories and regenerate the datasets. cc @stas00, this is what you experienced a few days ago.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2683/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2682/comments
https://api.github.com/repos/huggingface/datasets/issues/2682/events
https://github.com/huggingface/datasets/pull/2682
948,713,137
MDExOlB1bGxSZXF1ZXN0NjkzNTE2NjU2
2,682
Fix c4 expected files
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-20T14:29:31Z"
"2021-07-20T14:38:11Z"
"2021-07-20T14:38:10Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2682.diff", "html_url": "https://github.com/huggingface/datasets/pull/2682", "merged_at": "2021-07-20T14:38:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2682.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2682" }
Some files were not registered in the list of expected files to download. Fix https://github.com/huggingface/datasets/issues/2677
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2682/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2682/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2681/comments
https://api.github.com/repos/huggingface/datasets/issues/2681/events
https://github.com/huggingface/datasets/issues/2681
948,708,645
MDU6SXNzdWU5NDg3MDg2NDU=
2,681
5 duplicate datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
2
"2021-07-20T14:25:00Z"
"2021-07-20T15:44:17Z"
"2021-07-20T15:44:17Z"
CONTRIBUTOR
null
null
null
## Describe the bug In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are: - https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch <img width="838" alt="Screenshot 2021-07-20 at 16 33 58" src="https://user-images.githubusercontent.com/1676121/126342757-4625522a-f788-41a3-bd1f-2a8b9817bbf5.png"> - https://paperswithcode.com/dataset/squad -> https://huggingface.co/datasets/squad and https://huggingface.co/datasets/squad_v2 - https://paperswithcode.com/dataset/narrativeqa -> https://huggingface.co/datasets/narrativeqa and https://huggingface.co/datasets/narrativeqa_manual - https://paperswithcode.com/dataset/hate-speech-and-offensive-language -> https://huggingface.co/datasets/hate_offensive and https://huggingface.co/datasets/hate_speech_offensive - https://paperswithcode.com/dataset/newsph-nli -> https://huggingface.co/datasets/newsph and https://huggingface.co/datasets/newsph_nli Possible solutions: - don't fix (it works) - for each pair of duplicate datasets, remove one, and create an alias to the other. ## Steps to reproduce the bug Visit the Paperswithcode links, and look at the "Dataset Loaders" section ## Expected results There should only be one reference to a Hugging Face dataset loader ## Actual results Two Hugging Face dataset loaders
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2681/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2680/comments
https://api.github.com/repos/huggingface/datasets/issues/2680/events
https://github.com/huggingface/datasets/pull/2680
948,649,716
MDExOlB1bGxSZXF1ZXN0NjkzNDYyNzY3
2,680
feat: 🎸 add paperswithcode id for qasper dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
0
"2021-07-20T13:22:29Z"
"2021-07-20T14:04:10Z"
"2021-07-20T14:04:10Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2680.diff", "html_url": "https://github.com/huggingface/datasets/pull/2680", "merged_at": "2021-07-20T14:04:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2680.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2680" }
The reverse reference exists on paperswithcode: https://paperswithcode.com/dataset/qasper
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2680/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2680/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2679/comments
https://api.github.com/repos/huggingface/datasets/issues/2679/events
https://github.com/huggingface/datasets/issues/2679
948,506,638
MDU6SXNzdWU5NDg1MDY2Mzg=
2,679
Cannot load the blog_authorship_corpus due to codec errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/38069449?v=4", "events_url": "https://api.github.com/users/izaskr/events{/privacy}", "followers_url": "https://api.github.com/users/izaskr/followers", "following_url": "https://api.github.com/users/izaskr/following{/other_user}", "gists_url": "https://api.github.com/users/izaskr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/izaskr", "id": 38069449, "login": "izaskr", "node_id": "MDQ6VXNlcjM4MDY5NDQ5", "organizations_url": "https://api.github.com/users/izaskr/orgs", "received_events_url": "https://api.github.com/users/izaskr/received_events", "repos_url": "https://api.github.com/users/izaskr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/izaskr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/izaskr/subscriptions", "type": "User", "url": "https://api.github.com/users/izaskr" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
3
"2021-07-20T10:13:20Z"
"2021-07-21T17:02:21Z"
"2021-07-21T13:11:58Z"
NONE
null
null
null
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error similar to the one below was raised for (what seems like) every XML file. /home/izaskr/.cache/huggingface/datasets/downloads/extracted/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a/blogs/788358.male.24.Arts.Libra.xml cannot be loaded. Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/load.py", line 856, in load_dataset builder_instance.download_and_prepare( File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 583, in download_and_prepare self._download_and_prepare( File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/builder.py", line 671, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/izaskr/anaconda3/envs/local_vae_older/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2679/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2678/comments
https://api.github.com/repos/huggingface/datasets/issues/2678/events
https://github.com/huggingface/datasets/issues/2678
948,471,222
MDU6SXNzdWU5NDg0NzEyMjI=
2,678
Import Error in Kaggle notebook
{ "avatar_url": "https://avatars.githubusercontent.com/u/47216475?v=4", "events_url": "https://api.github.com/users/prikmm/events{/privacy}", "followers_url": "https://api.github.com/users/prikmm/followers", "following_url": "https://api.github.com/users/prikmm/following{/other_user}", "gists_url": "https://api.github.com/users/prikmm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prikmm", "id": 47216475, "login": "prikmm", "node_id": "MDQ6VXNlcjQ3MjE2NDc1", "organizations_url": "https://api.github.com/users/prikmm/orgs", "received_events_url": "https://api.github.com/users/prikmm/received_events", "repos_url": "https://api.github.com/users/prikmm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prikmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prikmm/subscriptions", "type": "User", "url": "https://api.github.com/users/prikmm" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
4
"2021-07-20T09:28:38Z"
"2021-07-21T13:59:26Z"
"2021-07-21T13:03:02Z"
NONE
null
null
null
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-input-9-652e886d387f> in <module> ----> 1 import datasets /opt/conda/lib/python3.7/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in <module> 36 import pandas as pd 37 import pyarrow as pa ---> 38 import pyarrow.compute as pc 39 from multiprocess import Pool, RLock 40 from tqdm.auto import tqdm /opt/conda/lib/python3.7/site-packages/pyarrow/compute.py in <module> 16 # under the License. 17 ---> 18 from pyarrow._compute import ( # noqa 19 Function, 20 FunctionOptions, ImportError: /opt/conda/lib/python3.7/site-packages/pyarrow/_compute.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5arrow7compute15KernelSignature8ToStringEv ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Kaggle - Python version: 3.7.10 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2678/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2677/comments
https://api.github.com/repos/huggingface/datasets/issues/2677/events
https://github.com/huggingface/datasets/issues/2677
948,429,788
MDU6SXNzdWU5NDg0Mjk3ODg=
2,677
Error when downloading C4
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
3
"2021-07-20T08:37:30Z"
"2021-07-20T14:41:31Z"
"2021-07-20T14:38:10Z"
NONE
null
null
null
Hi, I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug, or do I have some configuration missing on my server? Thanks! <img width="1014" alt="Screenshot 2021-07-20 at 11 37 17" src="https://user-images.githubusercontent.com/36672861/126289448-6e0db402-5f3f-485a-bf74-eb6e0271fc25.png">
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2677/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2677/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2676/comments
https://api.github.com/repos/huggingface/datasets/issues/2676/events
https://github.com/huggingface/datasets/pull/2676
947,734,909
MDExOlB1bGxSZXF1ZXN0NjkyNjc2NTg5
2,676
Increase json reader block_size automatically
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-19T14:51:14Z"
"2021-07-19T17:51:39Z"
"2021-07-19T17:51:38Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2676.diff", "html_url": "https://github.com/huggingface/datasets/pull/2676", "merged_at": "2021-07-19T17:51:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/2676.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2676" }
Currently some files can't be read with the default parameters of the JSON lines reader. For example this one: https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz raises a pyarrow error: ```python ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` The block size that is used is pyarrow's default (related to this [jira issue](https://issues.apache.org/jira/browse/ARROW-9612)). To fix this issue, I changed the block_size to increase automatically if a straddling issue occurs when parsing a batch of JSON lines. By default the value is `chunksize // 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. The block_size is then reset for each file. cc @thomwolf @albertvillanova
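For illustration, a minimal standalone sketch of the retry idea described above, using only public pyarrow APIs; it is not the actual `datasets` implementation, and the function name and default sizes are made up for the example. ```python
# Minimal sketch: retry a JSON-lines read with a doubled block_size whenever
# pyarrow raises the "straddling object" error quoted above.
import pyarrow as pa
import pyarrow.json as paj

def read_json_with_growing_block_size(path, block_size=1 << 20, max_block_size=1 << 30):
    while True:
        try:
            return paj.read_json(path, read_options=paj.ReadOptions(block_size=block_size))
        except pa.ArrowInvalid as e:
            if "straddling" not in str(e) or block_size >= max_block_size:
                raise
            block_size *= 2  # double and retry, as the PR does per batch of JSON lines
```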
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2676/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2676/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2675/comments
https://api.github.com/repos/huggingface/datasets/issues/2675/events
https://github.com/huggingface/datasets/pull/2675
947,657,732
MDExOlB1bGxSZXF1ZXN0NjkyNjEwNTA1
2,675
Parallelize ETag requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-19T13:30:42Z"
"2021-07-19T19:33:25Z"
"2021-07-19T19:33:25Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2675.diff", "html_url": "https://github.com/huggingface/datasets/pull/2675", "merged_at": "2021-07-19T19:33:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2675.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2675" }
Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed. In this PR I made the ETag requests parallel using multithreading. There is also a tqdm progress bar that shows up if there are more than 16 data files.
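As a rough illustration of the idea (not the code in this PR, which relies on `datasets`' internal request helpers), ETags can be fetched concurrently with a plain thread pool: ```python
# Sketch: fetch the ETag header of many remote files concurrently.
from concurrent.futures import ThreadPoolExecutor
import requests

def get_etag(url, timeout=10.0):
    response = requests.head(url, allow_redirects=True, timeout=timeout)
    return response.headers.get("ETag")

def get_etags(urls, max_workers=16):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(get_etag, urls))
```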
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2675/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2674/comments
https://api.github.com/repos/huggingface/datasets/issues/2674/events
https://github.com/huggingface/datasets/pull/2674
947,338,202
MDExOlB1bGxSZXF1ZXN0NjkyMzMzODU3
2,674
Fix sacrebleu parameter name
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-19T07:07:26Z"
"2021-07-19T08:07:03Z"
"2021-07-19T08:07:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2674.diff", "html_url": "https://github.com/huggingface/datasets/pull/2674", "merged_at": "2021-07-19T08:07:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2674.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2674" }
DONE: - Fix parameter name: `smooth` to `smooth_method`. - Improve kwargs description. - Align docs on using a metric. - Add example of passing additional arguments in using metrics. Related to #2669.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2674/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2674/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2673/comments
https://api.github.com/repos/huggingface/datasets/issues/2673/events
https://github.com/huggingface/datasets/pull/2673
947,300,008
MDExOlB1bGxSZXF1ZXN0NjkyMzAxMTgw
2,673
Fix potential DuplicatedKeysError in SQuAD
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-19T06:08:00Z"
"2021-07-19T07:08:03Z"
"2021-07-19T07:08:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2673.diff", "html_url": "https://github.com/huggingface/datasets/pull/2673", "merged_at": "2021-07-19T07:08:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2673.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2673" }
DONE: - Fix potential DuplicatedKeysError by ensuring keys are unique. - Align examples in the docs with SQuAD code. We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique).
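As an illustration of the "programmatically generated keys" practice (a generic sketch, not the SQuAD script itself), a loading script can simply yield a running index as the key: ```python
# Sketch: yield an enumerate() index as the example key so keys are unique by
# construction, instead of reusing an identifier read from the raw data.
import json

def _generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            yield idx, json.loads(line)
```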
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2673/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2672/comments
https://api.github.com/repos/huggingface/datasets/issues/2672/events
https://github.com/huggingface/datasets/pull/2672
947,294,605
MDExOlB1bGxSZXF1ZXN0NjkyMjk2NDQ4
2,672
Fix potential DuplicatedKeysError in LibriSpeech
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
"2021-07-19T06:00:49Z"
"2021-07-19T06:28:57Z"
"2021-07-19T06:28:56Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2672.diff", "html_url": "https://github.com/huggingface/datasets/pull/2672", "merged_at": "2021-07-19T06:28:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2672.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2672" }
DONE: - Fix unnecessary path join. - Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2672/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2672/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2671/comments
https://api.github.com/repos/huggingface/datasets/issues/2671/events
https://github.com/huggingface/datasets/pull/2671
947,273,875
MDExOlB1bGxSZXF1ZXN0NjkyMjc5MTM0
2,671
Mesinesp development and training data sets have been added.
{ "avatar_url": "https://avatars.githubusercontent.com/u/32900185?v=4", "events_url": "https://api.github.com/users/aslihanuysall/events{/privacy}", "followers_url": "https://api.github.com/users/aslihanuysall/followers", "following_url": "https://api.github.com/users/aslihanuysall/following{/other_user}", "gists_url": "https://api.github.com/users/aslihanuysall/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aslihanuysall", "id": 32900185, "login": "aslihanuysall", "node_id": "MDQ6VXNlcjMyOTAwMTg1", "organizations_url": "https://api.github.com/users/aslihanuysall/orgs", "received_events_url": "https://api.github.com/users/aslihanuysall/received_events", "repos_url": "https://api.github.com/users/aslihanuysall/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aslihanuysall/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aslihanuysall/subscriptions", "type": "User", "url": "https://api.github.com/users/aslihanuysall" }
[]
closed
false
null
[]
null
1
"2021-07-19T05:14:38Z"
"2021-07-19T07:32:28Z"
"2021-07-19T06:45:50Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2671.diff", "html_url": "https://github.com/huggingface/datasets/pull/2671", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2671.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2671" }
Mesinesp (https://zenodo.org/search?page=1&size=20&q=mesinesp) provides medical semantic indexing records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms. The Mesinesp development set (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) has a total of 750 records, and the training set has a total of 369,368 records.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2671/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2670/comments
https://api.github.com/repos/huggingface/datasets/issues/2670/events
https://github.com/huggingface/datasets/issues/2670
947,120,709
MDU6SXNzdWU5NDcxMjA3MDk=
2,670
Using sharding to parallelize indexing
{ "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggdupont", "id": 5583410, "login": "ggdupont", "node_id": "MDQ6VXNlcjU1ODM0MTA=", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "repos_url": "https://api.github.com/users/ggdupont/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "type": "User", "url": "https://api.github.com/users/ggdupont" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
"2021-07-18T21:26:26Z"
"2021-10-07T13:33:25Z"
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Creating an Elasticsearch index on a large dataset can take quite a long time and cannot be parallelized across shards (the index creations collide). **Describe the solution you'd like** When working on dataset shards, if an index already exists, its mapping should be checked and, if compatible, the indexing process should continue with the shard data. Additionally, at the end of the process, the `_indexes` dict should be sent back to the original dataset object (from which the shards were created) to allow the index to be used for later filtering on the whole dataset. **Describe alternatives you've considered** Each dataset shard could create independent partial indices. Then, at the whole-dataset level, all indices should be referenced in the `_indexes` dict and used in querying through `get_nearest_examples()`. The drawback is that the scores would be computed independently on the partial indices, leading to inconsistent values for most scoring based on corpus-level statistics (tf/idf, BM25). **Additional context** The objective is to parallelize the index creation to speed up the process (i.e. putting more load on the ES server, which can handle it), while later enabling search on the whole dataset.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/2670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2670/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2669/comments
https://api.github.com/repos/huggingface/datasets/issues/2669/events
https://github.com/huggingface/datasets/issues/2669
946,982,998
MDU6SXNzdWU5NDY5ODI5OTg=
2,669
Metric kwargs are not passed to underlying external metric f1_score
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
2
"2021-07-18T08:32:31Z"
"2021-07-18T18:36:05Z"
"2021-07-18T11:19:04Z"
CONTRIBUTOR
null
null
null
## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so. ## Steps to reproduce the bug ```python import datasets f1 = datasets.load_metric("f1", keep_in_memory=True, average="min") f1.add_batch(predictions=[0,2,3], references=[1, 2, 3]) f1.compute() ``` ## Expected results No error, because `average="min"` should be passed correctly to f1_score in sklearn. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\datasets\metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "C:\Users\bramv\.cache\huggingface\modules\datasets_modules\metrics\f1\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\f1.py", line 97, in _compute "f1": f1_score( File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1071, in f1_score return fbeta_score(y_true, y_pred, beta=1, labels=labels, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1195, in fbeta_score _, _, f, _ = precision_recall_fscore_support(y_true, y_pred, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\utils\validation.py", line 63, in inner_f return f(*args, **kwargs) File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1464, in precision_recall_fscore_support labels = _check_set_wise_labels(y_true, y_pred, average, labels, File "C:\Users\bramv\.virtualenvs\pipeline-TpEsXVex\lib\site-packages\sklearn\metrics\_classification.py", line 1294, in _check_set_wise_labels raise ValueError("Target is %s but average='binary'. Please " ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted']. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2669/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2668/comments
https://api.github.com/repos/huggingface/datasets/issues/2668/events
https://github.com/huggingface/datasets/pull/2668
946,867,622
MDExOlB1bGxSZXF1ZXN0NjkxOTY1MTY1
2,668
Add Russian SuperGLUE
{ "avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4", "events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}", "followers_url": "https://api.github.com/users/slowwavesleep/followers", "following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}", "gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/slowwavesleep", "id": 44175589, "login": "slowwavesleep", "node_id": "MDQ6VXNlcjQ0MTc1NTg5", "organizations_url": "https://api.github.com/users/slowwavesleep/orgs", "received_events_url": "https://api.github.com/users/slowwavesleep/received_events", "repos_url": "https://api.github.com/users/slowwavesleep/repos", "site_admin": false, "starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions", "type": "User", "url": "https://api.github.com/users/slowwavesleep" }
[]
closed
false
null
[]
null
2
"2021-07-17T17:41:28Z"
"2021-07-29T11:50:31Z"
"2021-07-29T11:50:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2668.diff", "html_url": "https://github.com/huggingface/datasets/pull/2668", "merged_at": "2021-07-29T11:50:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2668.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2668" }
Hi, This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2668/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2667/comments
https://api.github.com/repos/huggingface/datasets/issues/2667/events
https://github.com/huggingface/datasets/pull/2667
946,861,908
MDExOlB1bGxSZXF1ZXN0NjkxOTYwNzc3
2,667
Use tqdm from tqdm_utils
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
2
"2021-07-17T17:06:35Z"
"2021-07-19T17:39:10Z"
"2021-07-19T17:32:00Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2667.diff", "html_url": "https://github.com/huggingface/datasets/pull/2667", "merged_at": "2021-07-19T17:32:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/2667.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2667" }
This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, there is no easy way to disable progress bars in a multiprocess setting on Windows (patching logging with `datasets.utils.logging.get_verbosity = lambda: datasets.utils.logging.NOTSET` doesn't seem to work as well), so adding support for this is a future goal. Additionally, this PR adds a unit ("ba" for batches) to the bar printed by `Dataset.to_json` (this change is motivated by https://github.com/huggingface/datasets/issues/2657).
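A usage sketch of the new switch (assuming `disable_progress_bar` is exposed from the same `datasets.utils.tqdm_utils` module mentioned above): ```python
# Sketch: silence all datasets progress bars for the rest of the session.
from datasets.utils.tqdm_utils import disable_progress_bar

disable_progress_bar()
```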
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2667/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2667/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2666/comments
https://api.github.com/repos/huggingface/datasets/issues/2666/events
https://github.com/huggingface/datasets/pull/2666
946,825,140
MDExOlB1bGxSZXF1ZXN0NjkxOTMzMDM1
2,666
Adds CodeClippy dataset [WIP]
{ "avatar_url": "https://avatars.githubusercontent.com/u/69807323?v=4", "events_url": "https://api.github.com/users/arampacha/events{/privacy}", "followers_url": "https://api.github.com/users/arampacha/followers", "following_url": "https://api.github.com/users/arampacha/following{/other_user}", "gists_url": "https://api.github.com/users/arampacha/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arampacha", "id": 69807323, "login": "arampacha", "node_id": "MDQ6VXNlcjY5ODA3MzIz", "organizations_url": "https://api.github.com/users/arampacha/orgs", "received_events_url": "https://api.github.com/users/arampacha/received_events", "repos_url": "https://api.github.com/users/arampacha/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arampacha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arampacha/subscriptions", "type": "User", "url": "https://api.github.com/users/arampacha" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
2
"2021-07-17T13:32:04Z"
"2023-07-26T23:06:01Z"
"2022-10-03T09:37:35Z"
NONE
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2666.diff", "html_url": "https://github.com/huggingface/datasets/pull/2666", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2666.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2666" }
CodeClippy is an open-source code dataset scraped from GitHub during the flax-jax-community-week: https://the-eye.eu/public/AI/training_data/code_clippy_data/
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2666/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2666/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2665/comments
https://api.github.com/repos/huggingface/datasets/issues/2665/events
https://github.com/huggingface/datasets/pull/2665
946,822,036
MDExOlB1bGxSZXF1ZXN0NjkxOTMwNjky
2,665
Adds APPS dataset to the hub [WIP]
{ "avatar_url": "https://avatars.githubusercontent.com/u/69807323?v=4", "events_url": "https://api.github.com/users/arampacha/events{/privacy}", "followers_url": "https://api.github.com/users/arampacha/followers", "following_url": "https://api.github.com/users/arampacha/following{/other_user}", "gists_url": "https://api.github.com/users/arampacha/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arampacha", "id": 69807323, "login": "arampacha", "node_id": "MDQ6VXNlcjY5ODA3MzIz", "organizations_url": "https://api.github.com/users/arampacha/orgs", "received_events_url": "https://api.github.com/users/arampacha/received_events", "repos_url": "https://api.github.com/users/arampacha/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arampacha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arampacha/subscriptions", "type": "User", "url": "https://api.github.com/users/arampacha" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
1
"2021-07-17T13:13:17Z"
"2022-10-03T09:38:10Z"
"2022-10-03T09:38:10Z"
NONE
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2665.diff", "html_url": "https://github.com/huggingface/datasets/pull/2665", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2665.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2665" }
A loading script for the [APPS dataset](https://github.com/hendrycks/apps).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2665/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2665/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2663/comments
https://api.github.com/repos/huggingface/datasets/issues/2663/events
https://github.com/huggingface/datasets/issues/2663
946,552,273
MDU6SXNzdWU5NDY1NTIyNzM=
2,663
[`to_json`] add multi-proc sharding support
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
2
"2021-07-16T19:41:50Z"
"2021-09-13T13:56:37Z"
"2021-09-13T13:56:37Z"
CONTRIBUTOR
null
null
null
As discussed on Slack, it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress. If possible, this multi-proc support would ideally be implemented internally in `to_json` via a `num_proc` argument. I guess `num_proc` would be the number of shards? I think the user will need to use this feature wisely, since too many processes writing to, say, a regular HDD is likely to be slower than one process. I'm not sure whether the user or `datasets` should be responsible for concatenating the shards at the end; either way works for my needs. The code I was using: ``` from multiprocessing import cpu_count, Process, Queue [...] filtered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count()) DATASET_NAME = "oscar" SHARDS = 10 def process_shard(idx): print(f"Sharding {idx}") ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True) # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling print(f"Saving {DATASET_NAME}-{idx}.jsonl") ds_shard.to_json(f"{DATASET_NAME}-{idx}.jsonl", orient="records", lines=True, force_ascii=False) queue = Queue() processes = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)] for p in processes: p.start() for p in processes: p.join() ``` Thank you! @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2663/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2662/comments
https://api.github.com/repos/huggingface/datasets/issues/2662/events
https://github.com/huggingface/datasets/pull/2662
946,470,815
MDExOlB1bGxSZXF1ZXN0NjkxNjM5MjU5
2,662
Load Dataset from the Hub (NO DATASET SCRIPT)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
5
"2021-07-16T17:21:58Z"
"2021-08-25T14:53:01Z"
"2021-08-25T14:18:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2662.diff", "html_url": "https://github.com/huggingface/datasets/pull/2662", "merged_at": "2021-08-25T14:18:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2662.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2662" }
## Load the data from any Dataset repository on the Hub This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script. As a user, it's now possible to create a repo, upload some csv/json/text/parquet files, and then be able to load the data in one line. Here is an example with the `allenai/c4` repository, which contains a lot of compressed JSON Lines files: ```python from datasets import load_dataset data_files = {"train": "en/c4-train.*.json.gz"} c4 = load_dataset("allenai/c4", data_files=data_files, split="train", streaming=True) print(c4.n_shards) # 1024 print(next(iter(c4))) # {'text': 'Beginners BBQ Class Takin...'} ``` By default it loads all the files, but as shown in the example you can choose the ones you want with Unix-style patterns. Of course it's still possible to use dataset scripts, since they offer the most flexibility. ## Implementation details It uses `huggingface_hub` to list the files in a dataset repository. If you provide a path to a local directory instead of a repository name, it works the same way but uses `glob`. Depending on the data files available, or passed in the `data_files` parameter, one of the available builders will be used among the csv, json, text and parquet builders. Because of this, it's not possible to load both csv and json files at once. In this case, you have to load them separately and then concatenate the two datasets, for example. ## TODO - [x] tests - [x] docs - [x] when huggingface_hub gets a new release, update the CI and the setup.py Close https://github.com/huggingface/datasets/issues/2629
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/2662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2662/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2661/comments
https://api.github.com/repos/huggingface/datasets/issues/2661/events
https://github.com/huggingface/datasets/pull/2661
946,446,967
MDExOlB1bGxSZXF1ZXN0NjkxNjE5MzAz
2,661
Add SD task for SUPERB
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
11
"2021-07-16T16:43:21Z"
"2021-08-04T17:03:53Z"
"2021-08-04T17:03:53Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2661.diff", "html_url": "https://github.com/huggingface/datasets/pull/2661", "merged_at": "2021-08-04T17:03:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/2661.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2661" }
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). TODO: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [x] README: tags + description sections - ~~Add DER metric~~ (we leave the DER metric for a follow-up PR) Related to #2619. Close #2653. cc: @lewtun
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2661/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2661/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2660/comments
https://api.github.com/repos/huggingface/datasets/issues/2660/events
https://github.com/huggingface/datasets/pull/2660
946,316,180
MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0
2,660
Move checks from _map_single to map
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
"2021-07-16T13:53:33Z"
"2021-09-06T14:12:23Z"
"2021-09-06T14:12:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2660.diff", "html_url": "https://github.com/huggingface/datasets/pull/2660", "merged_at": "2021-09-06T14:12:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2660" }
The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is then wrapped into a list.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2660/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2659/comments
https://api.github.com/repos/huggingface/datasets/issues/2659/events
https://github.com/huggingface/datasets/pull/2659
946,155,407
MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3
2,659
Allow dataset config kwargs to be None
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
0
"2021-07-16T10:25:38Z"
"2021-07-16T12:46:07Z"
"2021-07-16T12:46:07Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2659.diff", "html_url": "https://github.com/huggingface/datasets/pull/2659", "merged_at": "2021-07-16T12:46:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/2659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2659" }
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None were simply ignored. This was an issue when None has a meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder that allows inferring the separator. cc @SBrandeis
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2659/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2658/comments
https://api.github.com/repos/huggingface/datasets/issues/2658/events
https://github.com/huggingface/datasets/issues/2658
946,139,532
MDU6SXNzdWU5NDYxMzk1MzI=
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
0
"2021-07-16T10:05:44Z"
"2021-07-16T12:46:06Z"
"2021-07-16T12:46:06Z"
MEMBER
null
null
null
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","`, which makes it impossible to have the csv loader infer the separator. Related to https://github.com/huggingface/datasets/pull/2656 cc @SBrandeis
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2658/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2657/comments
https://api.github.com/repos/huggingface/datasets/issues/2657/events
https://github.com/huggingface/datasets/issues/2657
945,822,829
MDU6SXNzdWU5NDU4MjI4Mjk=
2,657
`to_json` reporting enhancements
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
"2021-07-15T23:32:18Z"
"2021-07-15T23:33:53Z"
null
CONTRIBUTOR
null
null
null
While using `to_json`, 2 things came to mind that would have made the experience easier on the user: 1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json`, so that it'd be clear to the user what's happening? Surely, one can just print the description before calling `to_json`, but I thought perhaps it'd help to have it self-identify like you did for other progress bars recently. 2. It took me a while to make sense of the reported numbers: ``` 22%|β–ˆβ–ˆβ– | 1536/7076 [12:30:57<44:09:42, 28.70s/it] ``` One iteration here happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so the progress bar is perfect, but the numbers it reports are meaningless until one discovers that 1 it = 10K samples. And one still has to convert these in one's head - so it's not quick. I'm not exactly sure what the best way to approach this is; perhaps it can be part of `desc`? Or report M or K, so it'd be built-in if it were to print, e.g.: ``` 22%|β–ˆβ–ˆβ– | 15360K/70760K [12:30:57<44:09:42, 28.70s/it] ``` or ``` 22%|β–ˆβ–ˆβ– | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it] ``` (while of course remaining friendly to small datasets). I forget if tqdm lets you add a magnitude identifier to the running count. Thank you!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2657/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2657/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2656/comments
https://api.github.com/repos/huggingface/datasets/issues/2656/events
https://github.com/huggingface/datasets/pull/2656
945,421,790
MDExOlB1bGxSZXF1ZXN0NjkwNzUzNjA3
2,656
Change `from_csv` default arguments
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
1
"2021-07-15T14:09:06Z"
"2023-09-24T09:56:44Z"
"2021-07-16T10:23:26Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2656.diff", "html_url": "https://github.com/huggingface/datasets/pull/2656", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2656.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2656" }
Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator. This PR allows users to use this pandas feature by passing `sep=None` to `Dataset.from_csv`: ```python Dataset.from_csv( ..., sep=None ) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2656/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2656/timeline
null
null
true